---
license: cc-by-sa-4.0
size_categories:
- 10B<n<100B
---
# XLM-R-BERTić dataset

## Composition and usage

This dataset contains 11.5 billion words of texts written in Croatian, Bosnian, Montenegrin and Serbian.

It is an extension of the [BERTić-data dataset](http://hdl.handle.net/11356/1426), an 8.4-billion-word collection used to pre-train the [BERTić model](https://huggingface.co/classla/bcms-bertic) ([paper](https://aclanthology.org/2021.bsnlp-1.5.pdf)). This dataset adds three major sources: the MaCoCu HBS crawling collection, a collection of crawled news items, and the [mC4](https://huggingface.co/datasets/mc4) HBS dataset. The order of deduplication follows the list of parts/splits:
* macocu_hbs
* hr_news
* mC4
* BERTić-data
  * hrwac
  * classla_hr
  * cc100_hr
  * riznica
  * srwac
  * classla_sr
  * cc100_sr
  * bswac
  * classla_bs
  * cnrwac

The dataset was deduplicated with `onion` on the basis of word 5-tuples, with the duplicate threshold set to 90%.
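For illustration, this kind of n-tuple deduplication can be sketched as follows. This is only a minimal sketch of the idea, not the actual `onion` implementation, which operates on tokenized vertical files and is far more memory-efficient:

```python
# Minimal sketch of shingle-based deduplication (assumption: illustrative
# only, not how `onion` is actually implemented).

def shingles(words, n=5):
    """Return the set of all n-tuples of consecutive words."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def deduplicate(documents, n=5, threshold=0.9):
    """Drop a document when at least `threshold` of its word n-tuples
    have already been seen in previously kept documents."""
    seen, kept = set(), []
    for doc in documents:
        grams = shingles(doc.split(), n)
        dup_ratio = sum(g in seen for g in grams) / len(grams) if grams else 0.0
        if dup_ratio < threshold:
            kept.append(doc)
            seen.update(grams)
    return kept

docs = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over the lazy dog",  # exact duplicate, dropped
    "a completely different sentence with other words here",
]
deduplicated = deduplicate(docs)  # keeps the first and third document
```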

The entire dataset can be downloaded and used as follows:
```python
import datasets
dict_of_datasets = datasets.load_dataset("classla/xlm-r-bertic-data")
full_dataset = datasets.concatenate_datasets([d for d in dict_of_datasets.values()])
```

A single split can be taken as well, but note that this means all the splits will be downloaded and generated, which can take a long time:
```python
import datasets
riznica = datasets.load_dataset("classla/xlm-r-bertic-data", split="riznica")
```

To circumvent this, one option is to use streaming:

```python
import datasets
riznica = datasets.load_dataset("classla/xlm-r-bertic-data", split="riznica", streaming=True)
for i in riznica.take(2):
    print(i)
# Output:
# {'text': 'PRAGMATIČARI DOGMATI SANJARI'}
# {'text': 'Ivica Župan'}
```
Read more on streaming [here](https://huggingface.co/docs/datasets/stream).

If you use this dataset, please cite:

```
@inproceedings{ljubesic-etal-2024-language,
    title = "Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining",
    author = "Ljube{\v{s}}i{\'c}, Nikola  and
      Suchomel, V{\'\i}t  and
      Rupnik, Peter  and
      Kuzman, Taja  and
      van Noord, Rik",
    editor = "Melero, Maite  and
      Sakti, Sakriani  and
      Soria, Claudia",
    booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.sigul-1.23",
    pages = "189--203",
}
```