Modalities: Text
Formats: parquet
Languages: Turkish
Libraries: Datasets, Dask
meliksahturker committed ee5c620 (1 parent: 376c069)

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -25,7 +25,7 @@ language:
 
 # Dataset Card for Dataset Name
 vngrs-web-corpus is a mixed dataset made of cleaned Turkish sections of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4).
-This dataset originally created for training [VBART](https://arxiv.org/abs/2403.01308) and later used for training [TURNA](https://arxiv.org/abs/2401.14373).
+This dataset is originally created for training [VBART](https://arxiv.org/abs/2403.01308) and later used for training [TURNA](https://arxiv.org/abs/2401.14373).
 The cleaning procedures of this dataset are explained in Appendix A of the [VBART paper](https://arxiv.org/abs/2403.01308).
 It consists of 50.3M pages and 25.33B tokens when tokenized by the VBART tokenizer.
 
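For reference, a minimal sketch of loading this corpus with the Hugging Face `datasets` library, which the page lists alongside Dask as a supported library. The repository id `vngrs-ai/vngrs-web-corpus`, the `train` split, and the `text` column are assumptions based on the dataset name, not confirmed by this page:

```python
# Minimal sketch: stream the corpus rather than downloading all 50.3M pages.
# NOTE: the repo id "vngrs-ai/vngrs-web-corpus", the "train" split, and the
# "text" column are assumptions; check the dataset card for the exact names.
from datasets import load_dataset

ds = load_dataset("vngrs-ai/vngrs-web-corpus", split="train", streaming=True)
for example in ds.take(3):  # IterableDataset.take yields the first 3 examples
    print(example["text"][:200])
```

Streaming avoids materializing the full parquet shards locally; for bulk processing, the same files can instead be read in parallel with Dask.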