guipenedo committed e76742d (1 parent: c8e8c9c)

updated point of contact

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -20324,7 +20324,7 @@ In total, we provide filtered data for **1,893 language-script pairs**. Of these

  While we tried our best to not overfilter, we know that our filtering isn't perfect, and wanted to allow the community to **easily re-filter the data with their own filtering criteria**. We have therefore also uploaded the data that was **removed** by our filtering pipeline for each language (it is suffixed by `_removed`). The _filtered + the removed subsets_ of each language represent the entire data for a given language following global deduplication, which means that you do not have to re-deduplicate it yourself. You can find and adapt our filtering [code here](https://github.com/huggingface/fineweb-2/blob/main/fineweb-2-pipeline.py).

- Additionally, we also uploaded data for scripts that the language classifier does not support, without any deduplication or filtering. These are prefixed by `und_`.
+ Additionally, we also uploaded data in scripts that the language classifier does not support, or in a supported script but an unknown language, without any deduplication or filtering. These are prefixed by `und_`.

  The following table shows the size of the filtering subset for the biggest 80 languages. Feel free to expand the _details_ below for the full list.

@@ -24634,7 +24634,7 @@ Expand each individual language to see the corresponding plot. The error bars co
  ## Dataset Description

  - **Homepage and Repository:** [https://huggingface.co/datasets/HuggingFaceFW/fineweb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)
- - **Point of Contact:** please create a discussion on the Community tab
+ - **Point of Contact:** https://huggingface.co/spaces/HuggingFaceFW/discussion
  - **License:** Open Data Commons Attribution License (ODC-By) v1.0

  ### Dataset Summary
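
The `_removed` suffix and `und_` prefix mentioned in the README excerpt above correspond to dataset config names, so the filtered and removed subsets of a language can be reloaded side by side for custom re-filtering. Below is a minimal sketch (not part of this commit) using the standard `datasets.load_dataset` API; the specific config `fra_Latn`, its `fra_Latn_removed` counterpart, and the `text` column name are illustrative assumptions based on the naming convention described above.

```python
# Minimal sketch, not from the commit: reload a language's kept and removed
# subsets and re-filter them with your own criteria. Config names such as
# "fra_Latn" / "fra_Latn_removed" and the "text" column are assumptions
# based on the naming convention described in the README excerpt above.
from datasets import load_dataset

# Documents that passed the FineWeb-2 filtering pipeline for this language.
kept = load_dataset(
    "HuggingFaceFW/fineweb-2", name="fra_Latn", split="train", streaming=True
)

# Documents the pipeline removed; kept + removed together form the full
# globally deduplicated data for the language, so no re-deduplication is needed.
removed = load_dataset(
    "HuggingFaceFW/fineweb-2", name="fra_Latn_removed", split="train", streaming=True
)

# Example of a custom re-filtering pass over the removed subset,
# e.g. recovering documents that are at least 500 characters long.
recovered = (doc for doc in removed if len(doc["text"]) >= 500)
```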