---
task_categories:
- text-generation
- fill-mask
language:
- cs
pretty_name: BUT-LCC
size_categories:
- 10B
---

## Latest Updates
- **06/05/2024** We released a small manually annotated [dataset of adult content](https://huggingface.co/datasets/BUT-FIT/adult_content_classifier_dataset). We used a classifier trained on this dataset to filter our corpus.

## Data Sources
| Part            | GB of text | GB of titles |  % |
|-----------------|-----------:|-------------:|---:|
| CulturaX        |     157.79 |         3.85 | 49 |
| TenTen-cs-2017  |      48.97 |         0.95 | 15 |
| BUT_Crawl       |      25.15 |         0.80 |  8 |
| cswiki-20230101 |       1.05 |         0.01 |  0 |
| historical      |      13.47 |         0.00 |  4 |
| hplt            |      65.55 |         3.20 | 21 |
| idnes_comments  |       7.38 |         0.03 |  2 |
| **Sum**         |     319.36 |         8.84 |    |
## Format
The corpus consists of train and test splits. It uses the jsonl format, which means that every sample is a JSON object on its own line.

### Sample Format
```json
{
  "id": unique identifier,
  "part": original source,
  "title": source document title,
  "text": the content,
  "ugly": (type: bool) inappropriate content,
  "ugly_score": (type: float) score from the SVM classifier that filters inappropriate content
}
```

# License Information
- We do not own any of the text from which these data have been extracted.
- We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved"). Detailed licensing information for the contained corpora (not crawled by us) is given below.

| Corpus | Licensing Information |
|-----------------|----------------|
| CulturaX | [uonlp/CulturaX](https://huggingface.co/datasets/uonlp/CulturaX#license-information) |
| TenTen-cs-2017 | [NLP Centre Web Corpus License Agreement](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-4835) |
| Czech Wikipedia | [CC BY-SA 4.0 DEED](https://creativecommons.org/licenses/by-sa/4.0/deed.en) |
| Historical | OCR'd documents from 1850 onward, publicly available from the [Czech Digital Library](https://www.digitalniknihovna.cz/) |
| HPLT | [https://hplt-project.org/datasets/v1.2](https://hplt-project.org/datasets/v1.2) |

## Our Models Linked to This Dataset
- [BUT-FIT/CSMPT7B](https://huggingface.co/BUT-FIT/csmpt7b)
- [BUT-FIT/CSTinyLlama-1.2B](https://huggingface.co/BUT-FIT/CSTinyLlama-1.2B)
- [BUT-FIT/Czech-GPT-2-XL-133k](https://huggingface.co/BUT-FIT/Czech-GPT-2-XL-133k)

## Statistics
| Split | Samples     |
|-------|------------:|
| Train | 176 780 582 |
| Test  |      20 000 |
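Because every sample is a single JSON object on its own line, the corpus can be consumed with a plain line-by-line reader. Below is a minimal sketch that parses jsonl lines and keeps only samples not flagged as inappropriate; the field names follow the sample format above, while the concrete values and the `iter_clean` helper are purely illustrative, not taken from the real data.

```python
import json

# Hypothetical jsonl lines in the documented sample format
# (values are made up for illustration only).
lines = [
    '{"id": "doc-1", "part": "CulturaX", "title": "Titulek", '
    '"text": "Obsah dokumentu.", "ugly": false, "ugly_score": 0.02}',
    '{"id": "doc-2", "part": "hplt", "title": "Jiny titulek", '
    '"text": "Dalsi obsah.", "ugly": true, "ugly_score": 0.97}',
]

def iter_clean(jsonl_lines):
    """Yield parsed samples whose `ugly` flag is False."""
    for line in jsonl_lines:
        sample = json.loads(line)
        if not sample["ugly"]:
            yield sample

clean = list(iter_clean(lines))
```

The same loop works unchanged when iterating over a file handle of the train or test split instead of the in-memory list.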
## ID 2 URL Mapping
If you need to recover the original webpages, we provide an ID-to-source-URL mapping, where possible, in the `id2url.csv` file.

# Acknowledgement
This work was supported by the NAKI III program of the Ministry of Culture of the Czech Republic, project semANT, "Semantic Explorer of Textual Cultural Heritage" (Czech: "Sémantický průzkumník textového kulturního dědictví"), grant no. `DH23P03OVV060`, and by the Ministry of Education, Youth and Sports of the Czech Republic through e-INFRA CZ (ID: `90254`).

# Contributors
- [Jan Doležal](https://www.fit.vut.cz/person/idolezal/.en) developed the cleaning pipeline for text processing, collected data for cleaning, and analyzed the cutoff threshold for pruning.
- [Martin Dočkal](https://www.fit.vut.cz/person/idocekal/.en) uploaded the data to Huggingface and helped with the cutoff analysis.
- [Martin Fajčík](https://mfajcik.github.io/) reviewed existing corpora, planned the pipeline steps, processed the data for LM training, and verified their usefulness.
- [Martin Kišš](https://www.fit.vut.cz/person/ikiss/.en) downloaded the historical documents and ran our PeroOCR on the collection.
- [Karel Beneš](https://www.fit.vut.cz/person/ibenes/.en) cleaned the historical documents and created an n-gram LM for document filtering.
- [Karel Ondřej](https://www.fit.vut.cz/person/ondrej/.en) wrote the crawler for collecting BUT_Crawl and prepared a preliminary clean version of the corpus.
- [Michal Hradiš](https://www.fit.vut.cz/person/ihradis/.en) managed the work and pushed the members when necessary.