Update README.md
README.md CHANGED
@@ -405,7 +405,7 @@ configs:
📚 FineWeb-Edu dataset consists of **1.3T tokens** ([FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)) and **5.4T tokens** of educational web pages filtered from the 🍷 FineWeb dataset. This is the 5.4 trillion token version.
-### Note: this version uses a lower educational score threshold = 2, which results in more
+### Note: this version uses a lower educational score threshold = 2, which results in more documents, but lower quality compared to the 1.3T version. For more details check the FineWeb [blog post](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1).
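For reference, a minimal sketch of streaming this threshold-2 release with the `datasets` library; the `name="default"` config is an assumption, and a specific CC-MAIN dump config can be substituted if one is exposed:

```python
from datasets import load_dataset

# Stream the 5.4T-token (threshold = 2) release instead of downloading it in full.
# name="default" is an assumption; check the dataset card for the available configs.
ds = load_dataset("HuggingFaceFW/fineweb-edu-score-2", name="default",
                  split="train", streaming=True)

print(next(iter(ds)))  # one raw document record
```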
To enhance FineWeb's quality, we developed an [educational quality classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) using annotations generated by Llama3-70B-Instruct. We then used this classifier to retain only the most educational web pages. FineWeb-Edu outperforms FineWeb on popular benchmarks and shows the power of classifiers trained on synthetic data.
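A minimal sketch of scoring one document with the released classifier, assuming the checkpoint behaves as a standard sequence-classification model whose single regression logit approximates the 0-5 educational score:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")

text = "Photosynthesis converts light energy into chemical energy stored in glucose."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    # Assumed: a single regression logit approximating the 0-5 educational score.
    score = model(**inputs).logits.squeeze(-1).item()

print(f"educational score: {score:.2f}")
```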
@@ -475,23 +475,19 @@ We fine-tuned a Bert-like regression model using these annotations, based on [Sn
The classifier is available at: [https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/)
-### Filtering
-
-TODO: add ablation results
-
-We release these two dataset as [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) and [FineWeb-Edu-score-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2) along with the classifier.
-
-## Dataset performance evaluation and ablations
-
-We
-
-You will find
+### Filtering and results
+
+**Note**: You can find more details about the ablations and results in the FineWeb blog post (TODO).
+We investigated the impact of using different thresholds for the filtering and found that a threshold of 3 gave the best overall results. Although using a threshold higher than 3 improves performance on knowledge- and reasoning-intensive benchmarks, it significantly degrades performance on HellaSwag and PIQA.
+We then built 📚 FineWeb-Edu by filtering out samples with scores lower than 3. This removed 92% of the dataset, leaving us with 1.3T educational tokens. Our ablations demonstrated that this refined dataset surpasses 🍷 FineWeb and all other open web datasets, with remarkable improvements on educational benchmarks such as MMLU, ARC, and OpenBookQA. To retain more tokens, we also experimented with a less strict threshold of 2 instead of 3. While less performant than threshold 3, this still outperformed FineWeb and preserved 5.4T tokens.
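A sketch of this filtering step on a handful of raw 🍷 FineWeb documents; the `name="default"` config, the `text` column, and rounding the regression score to an integer before thresholding are assumptions:

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")

# Stream a few raw FineWeb documents rather than downloading the full dataset.
fineweb = load_dataset("HuggingFaceFW/fineweb", name="default",
                       split="train", streaming=True)

THRESHOLD = 3  # samples scoring below this are filtered out
kept = []
for doc in fineweb.take(8):
    inputs = tokenizer(doc["text"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        score = model(**inputs).logits.squeeze(-1).item()
    if round(score) >= THRESHOLD:  # rounding before thresholding is an assumption
        kept.append(doc)

print(f"kept {len(kept)} of 8 sampled documents")
```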
+The plot below compares FineWeb-Edu to other web datasets:
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/hJlyTgDzZpYuxO9LUm0PF.png)
+We release these two datasets as [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) and [FineWeb-Edu-score-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2) along with the [classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier).
+You will find all the ablation models in [this collection](https://huggingface.co/collections/HuggingFaceFW/ablation-models-662457b0d213e8c14fe47f32). The FineWeb-Edu ablation model (trained on 350B tokens) is available at [https://huggingface.co/HuggingFaceFW/ablation-model-fineweb-edu](https://huggingface.co/HuggingFaceFW/ablation-model-fineweb-edu).
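A quick smoke test of that ablation model, assuming the checkpoint loads with the standard causal-LM auto classes (check the model card for specifics):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed to load with the standard auto classes.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/ablation-model-fineweb-edu")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceFW/ablation-model-fineweb-edu")

inputs = tokenizer("The water cycle begins when", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```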
## Considerations for Using the Data
This section is copied from the parent dataset: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb).