XLM-R-BERTić dataset

Composition and usage

This dataset contains 11.5 billion words of text written in Croatian, Bosnian, Montenegrin and Serbian.

It is an extension of the BERTić-data dataset, an 8.4-billion-word collection used to pre-train the BERTić model (paper). This dataset adds three major parts: the MaCoCu HBS crawling collection, a collection of crawled news items (hr_news), and the mC4 HBS dataset. The parts/splits are listed below in the order in which they were deduplicated:

  • macocu_hbs
  • hr_news
  • mC4
  • BERTić-data
    • hrwac
    • classla_hr
    • cc100_hr
    • riznica
    • srwac
    • classla_sr
    • cc100_sr
    • bswac
    • classla_bs
    • cnrwac

The dataset was deduplicated with onion, on the basis of 5-tuples of words, with the duplicate threshold set to 90%.
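The actual deduplication was performed with the onion tool; purely as an illustration of the idea (not the actual pipeline), a minimal Python sketch of 5-tuple-based deduplication with a 90% duplicate threshold could look like this:

def word_5_tuples(text):
    # All consecutive 5-tuples (shingles) of words in a document.
    words = text.split()
    return {tuple(words[i:i + 5]) for i in range(len(words) - 4)}

def deduplicate(documents, threshold=0.9):
    # Drop a document if at least `threshold` of its 5-tuples were already seen.
    seen, kept = set(), []
    for doc in documents:
        shingles = word_5_tuples(doc)
        if shingles and len(shingles & seen) / len(shingles) >= threshold:
            continue  # treated as a (near-)duplicate
        kept.append(doc)
        seen.update(shingles)
    return kept

docs = [
    "the quick brown fox jumps over the lazy dog near the river bank",
    "the quick brown fox jumps over the lazy dog near the river bank",  # exact duplicate, dropped
    "an entirely different sentence about corpora and deduplication thresholds",
]
print(len(deduplicate(docs)))  # 2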

The entire dataset can be downloaded and used as follows:

import datasets
dict_of_datasets = datasets.load_dataset("classla/xlm-r-bertic-data")
full_dataset = datasets.concatenate_datasets([d for d in dict_of_datasets.values()])
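
As a quick, optional sanity check once the download has completed, the concatenated dataset can be inspected directly; each example is a dictionary with a single text field (as in the streaming output shown further below):

print(full_dataset)      # number of rows and column names
print(full_dataset[0])   # e.g. {'text': '...'}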

A single split can be loaded as well, but note that this still downloads and generates all the splits, which can take a long time:

import datasets
riznica = datasets.load_dataset("classla/xlm-r-bertic-data", split="riznica")

To circumvent this, one option is to use streaming:

import datasets
riznica = datasets.load_dataset("classla/xlm-r-bertic-data", split="riznica", streaming=True)
for i in riznica.take(2):
    print(i)
# Output:
# {'text': 'PRAGMATIČARI DOGMATI SANJARI'}
# {'text': 'Ivica Župan'}

Read more on streaming in the datasets documentation.
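
Streaming can also cover the whole collection at once. As a rough sketch (not part of the usage instructions above), all streamed splits can be interleaved into a single iterable corpus; note that interleave_datasets by default alternates between splits and stops once the smallest split is exhausted (this can be changed with its stopping_strategy parameter):

import datasets

# Load every split as an IterableDataset; nothing is downloaded up front.
streamed = datasets.load_dataset("classla/xlm-r-bertic-data", streaming=True)

# Interleave the streamed splits into one iterable corpus.
combined = datasets.interleave_datasets(list(streamed.values()))

for example in combined.take(3):
    print(example["text"][:80])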

If you use this dataset, please cite:

@inproceedings{ljubesic-etal-2024-language,
    title = "Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining",
    author = "Ljube{\v{s}}i{\'c}, Nikola  and
      Suchomel, V{\'\i}t  and
      Rupnik, Peter  and
      Kuzman, Taja  and
      van Noord, Rik",
    editor = "Melero, Maite  and
      Sakti, Sakriani  and
      Soria, Claudia",
    booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.sigul-1.23",
    pages = "189--203",
}