Update README.md
README.md CHANGED
@@ -405,7 +405,7 @@ configs:

📚 FineWeb-Edu dataset consists of **1.3T tokens** ([FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)) and **5.4T tokens** of educational web pages filtered from the 🍷 FineWeb dataset. This is the 5.4T-token version.

-
+### Note: this version uses a lower educational score threshold = 2, which results in more coverage, but lower-quality documents.

To enhance FineWeb's quality, we developed an [educational quality classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) using annotations generated by Llama3-70B-Instruct. We then used this classifier to retain only the most educational web pages. FineWeb-Edu outperforms FineWeb on popular benchmarks and shows the power of classifiers trained on synthetic data.
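
To picture the filtering step, here is a minimal sketch of scoring one page with the linked classifier. It assumes the checkpoint loads as a standard `transformers` sequence-classification model whose single regression logit is the educational score (roughly 0 to 5); the sample text is made up, and the released inference code (linked below) remains the reference.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Photosynthesis is the process by which plants turn light into chemical energy."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    # Assumption: a single regression logit holding the educational score
    score = model(**inputs).logits.squeeze(-1).item()

# Clamp and round to get an integer score; pages under the threshold
# (2 for this version of the dataset) would be filtered out.
print(f"score={score:.2f}, int_score={int(round(max(0.0, min(score, 5.0))))}")
```
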
@@ -413,7 +413,7 @@ The [Dataset Curation](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu

## What is being released?

-Along with the dataset, which includes all filtered CommonCrawl dumps since 2013, we also release the educational classifier used for the filtering as well as the code for training it and running inference at:
+Along with the dataset, which includes all filtered CommonCrawl dumps since 2013, we also release the educational classifier used for the filtering, as well as the code for training it and running inference, at: https://github.com/huggingface/cosmopedia/tree/main/classification.

## How to load the dataset

Similarly to FineWeb, you can load the full dataset or a specific crawl/dump. Dumps have the format `CC-MAIN-(year)-(week number)`.
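
For example, a minimal sketch using the 🤗 `datasets` library. The repo id and dump name are illustrative, and the `score` field is an assumption carried over from the main FineWeb-Edu dataset; check this repository's configs for the exact names.

```python
from datasets import load_dataset

# Stream a single dump instead of downloading the full 5.4T-token dataset.
# Repo id and dump name are assumptions; substitute this repository's id
# and any dump listed in the configs (format: CC-MAIN-(year)-(week number)).
fw = load_dataset(
    "HuggingFaceFW/fineweb-edu-score-2",
    name="CC-MAIN-2024-10",
    split="train",
    streaming=True,
)

# Assuming rows keep the classifier's `score` field as in FineWeb-Edu,
# a stricter educational threshold can be re-applied on the fly:
high_quality = fw.filter(lambda row: row["score"] >= 3)
print(next(iter(high_quality))["url"])
```
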