license: odc-by
task_categories:
  - text-generation
language:
  - en
pretty_name: FineWeb-Edu (score >= 2)
size_categories:
  - n>1T
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*/*
  - config_name: CC-MAIN-2024-10
    data_files:
      - split: train
        path: data/CC-MAIN-2024-10/*
  - config_name: CC-MAIN-2023-50
    data_files:
      - split: train
        path: data/CC-MAIN-2023-50/*
  - config_name: CC-MAIN-2023-40
    data_files:
      - split: train
        path: data/CC-MAIN-2023-40/*
  - config_name: CC-MAIN-2023-23
    data_files:
      - split: train
        path: data/CC-MAIN-2023-23/*
  - config_name: CC-MAIN-2023-14
    data_files:
      - split: train
        path: data/CC-MAIN-2023-14/*
  - config_name: CC-MAIN-2023-06
    data_files:
      - split: train
        path: data/CC-MAIN-2023-06/*
  - config_name: CC-MAIN-2022-49
    data_files:
      - split: train
        path: data/CC-MAIN-2022-49/*
  - config_name: CC-MAIN-2022-40
    data_files:
      - split: train
        path: data/CC-MAIN-2022-40/*
  - config_name: CC-MAIN-2022-33
    data_files:
      - split: train
        path: data/CC-MAIN-2022-33/*
  - config_name: CC-MAIN-2022-27
    data_files:
      - split: train
        path: data/CC-MAIN-2022-27/*
  - config_name: CC-MAIN-2022-21
    data_files:
      - split: train
        path: data/CC-MAIN-2022-21/*
  - config_name: CC-MAIN-2022-05
    data_files:
      - split: train
        path: data/CC-MAIN-2022-05/*
  - config_name: CC-MAIN-2021-49
    data_files:
      - split: train
        path: data/CC-MAIN-2021-49/*
  - config_name: CC-MAIN-2021-43
    data_files:
      - split: train
        path: data/CC-MAIN-2021-43/*
  - config_name: CC-MAIN-2021-39
    data_files:
      - split: train
        path: data/CC-MAIN-2021-39/*
  - config_name: CC-MAIN-2021-31
    data_files:
      - split: train
        path: data/CC-MAIN-2021-31/*
  - config_name: CC-MAIN-2021-25
    data_files:
      - split: train
        path: data/CC-MAIN-2021-25/*
  - config_name: CC-MAIN-2021-21
    data_files:
      - split: train
        path: data/CC-MAIN-2021-21/*
  - config_name: CC-MAIN-2021-17
    data_files:
      - split: train
        path: data/CC-MAIN-2021-17/*
  - config_name: CC-MAIN-2021-10
    data_files:
      - split: train
        path: data/CC-MAIN-2021-10/*
  - config_name: CC-MAIN-2021-04
    data_files:
      - split: train
        path: data/CC-MAIN-2021-04/*
  - config_name: CC-MAIN-2020-50
    data_files:
      - split: train
        path: data/CC-MAIN-2020-50/*
  - config_name: CC-MAIN-2020-45
    data_files:
      - split: train
        path: data/CC-MAIN-2020-45/*
  - config_name: CC-MAIN-2020-40
    data_files:
      - split: train
        path: data/CC-MAIN-2020-40/*
  - config_name: CC-MAIN-2020-34
    data_files:
      - split: train
        path: data/CC-MAIN-2020-34/*
  - config_name: CC-MAIN-2020-29
    data_files:
      - split: train
        path: data/CC-MAIN-2020-29/*
  - config_name: CC-MAIN-2020-24
    data_files:
      - split: train
        path: data/CC-MAIN-2020-24/*
  - config_name: CC-MAIN-2020-16
    data_files:
      - split: train
        path: data/CC-MAIN-2020-16/*
  - config_name: CC-MAIN-2020-10
    data_files:
      - split: train
        path: data/CC-MAIN-2020-10/*
  - config_name: CC-MAIN-2020-05
    data_files:
      - split: train
        path: data/CC-MAIN-2020-05/*
  - config_name: CC-MAIN-2019-51
    data_files:
      - split: train
        path: data/CC-MAIN-2019-51/*
  - config_name: CC-MAIN-2019-47
    data_files:
      - split: train
        path: data/CC-MAIN-2019-47/*
  - config_name: CC-MAIN-2019-43
    data_files:
      - split: train
        path: data/CC-MAIN-2019-43/*
  - config_name: CC-MAIN-2019-39
    data_files:
      - split: train
        path: data/CC-MAIN-2019-39/*
  - config_name: CC-MAIN-2019-35
    data_files:
      - split: train
        path: data/CC-MAIN-2019-35/*
  - config_name: CC-MAIN-2019-30
    data_files:
      - split: train
        path: data/CC-MAIN-2019-30/*
  - config_name: CC-MAIN-2019-26
    data_files:
      - split: train
        path: data/CC-MAIN-2019-26/*
  - config_name: CC-MAIN-2019-22
    data_files:
      - split: train
        path: data/CC-MAIN-2019-22/*
  - config_name: CC-MAIN-2019-18
    data_files:
      - split: train
        path: data/CC-MAIN-2019-18/*
  - config_name: CC-MAIN-2019-13
    data_files:
      - split: train
        path: data/CC-MAIN-2019-13/*
  - config_name: CC-MAIN-2019-09
    data_files:
      - split: train
        path: data/CC-MAIN-2019-09/*
  - config_name: CC-MAIN-2019-04
    data_files:
      - split: train
        path: data/CC-MAIN-2019-04/*
  - config_name: CC-MAIN-2018-51
    data_files:
      - split: train
        path: data/CC-MAIN-2018-51/*
  - config_name: CC-MAIN-2018-47
    data_files:
      - split: train
        path: data/CC-MAIN-2018-47/*
  - config_name: CC-MAIN-2018-43
    data_files:
      - split: train
        path: data/CC-MAIN-2018-43/*
  - config_name: CC-MAIN-2018-39
    data_files:
      - split: train
        path: data/CC-MAIN-2018-39/*
  - config_name: CC-MAIN-2018-34
    data_files:
      - split: train
        path: data/CC-MAIN-2018-34/*
  - config_name: CC-MAIN-2018-30
    data_files:
      - split: train
        path: data/CC-MAIN-2018-30/*
  - config_name: CC-MAIN-2018-26
    data_files:
      - split: train
        path: data/CC-MAIN-2018-26/*
  - config_name: CC-MAIN-2018-22
    data_files:
      - split: train
        path: data/CC-MAIN-2018-22/*
  - config_name: CC-MAIN-2018-17
    data_files:
      - split: train
        path: data/CC-MAIN-2018-17/*
  - config_name: CC-MAIN-2018-13
    data_files:
      - split: train
        path: data/CC-MAIN-2018-13/*
  - config_name: CC-MAIN-2018-09
    data_files:
      - split: train
        path: data/CC-MAIN-2018-09/*
  - config_name: CC-MAIN-2018-05
    data_files:
      - split: train
        path: data/CC-MAIN-2018-05/*
  - config_name: CC-MAIN-2017-51
    data_files:
      - split: train
        path: data/CC-MAIN-2017-51/*
  - config_name: CC-MAIN-2017-47
    data_files:
      - split: train
        path: data/CC-MAIN-2017-47/*
  - config_name: CC-MAIN-2017-43
    data_files:
      - split: train
        path: data/CC-MAIN-2017-43/*
  - config_name: CC-MAIN-2017-39
    data_files:
      - split: train
        path: data/CC-MAIN-2017-39/*
  - config_name: CC-MAIN-2017-34
    data_files:
      - split: train
        path: data/CC-MAIN-2017-34/*
  - config_name: CC-MAIN-2017-30
    data_files:
      - split: train
        path: data/CC-MAIN-2017-30/*
  - config_name: CC-MAIN-2017-26
    data_files:
      - split: train
        path: data/CC-MAIN-2017-26/*
  - config_name: CC-MAIN-2017-22
    data_files:
      - split: train
        path: data/CC-MAIN-2017-22/*
  - config_name: CC-MAIN-2017-17
    data_files:
      - split: train
        path: data/CC-MAIN-2017-17/*
  - config_name: CC-MAIN-2017-13
    data_files:
      - split: train
        path: data/CC-MAIN-2017-13/*
  - config_name: CC-MAIN-2017-09
    data_files:
      - split: train
        path: data/CC-MAIN-2017-09/*
  - config_name: CC-MAIN-2017-04
    data_files:
      - split: train
        path: data/CC-MAIN-2017-04/*
  - config_name: CC-MAIN-2016-50
    data_files:
      - split: train
        path: data/CC-MAIN-2016-50/*
  - config_name: CC-MAIN-2016-44
    data_files:
      - split: train
        path: data/CC-MAIN-2016-44/*
  - config_name: CC-MAIN-2016-40
    data_files:
      - split: train
        path: data/CC-MAIN-2016-40/*
  - config_name: CC-MAIN-2016-36
    data_files:
      - split: train
        path: data/CC-MAIN-2016-36/*
  - config_name: CC-MAIN-2016-30
    data_files:
      - split: train
        path: data/CC-MAIN-2016-30/*
  - config_name: CC-MAIN-2016-26
    data_files:
      - split: train
        path: data/CC-MAIN-2016-26/*
  - config_name: CC-MAIN-2016-22
    data_files:
      - split: train
        path: data/CC-MAIN-2016-22/*
  - config_name: CC-MAIN-2016-18
    data_files:
      - split: train
        path: data/CC-MAIN-2016-18/*
  - config_name: CC-MAIN-2016-07
    data_files:
      - split: train
        path: data/CC-MAIN-2016-07/*
  - config_name: CC-MAIN-2015-48
    data_files:
      - split: train
        path: data/CC-MAIN-2015-48/*
  - config_name: CC-MAIN-2015-40
    data_files:
      - split: train
        path: data/CC-MAIN-2015-40/*
  - config_name: CC-MAIN-2015-35
    data_files:
      - split: train
        path: data/CC-MAIN-2015-35/*
  - config_name: CC-MAIN-2015-32
    data_files:
      - split: train
        path: data/CC-MAIN-2015-32/*
  - config_name: CC-MAIN-2015-27
    data_files:
      - split: train
        path: data/CC-MAIN-2015-27/*
  - config_name: CC-MAIN-2015-22
    data_files:
      - split: train
        path: data/CC-MAIN-2015-22/*
  - config_name: CC-MAIN-2015-18
    data_files:
      - split: train
        path: data/CC-MAIN-2015-18/*
  - config_name: CC-MAIN-2015-14
    data_files:
      - split: train
        path: data/CC-MAIN-2015-14/*
  - config_name: CC-MAIN-2015-11
    data_files:
      - split: train
        path: data/CC-MAIN-2015-11/*
  - config_name: CC-MAIN-2015-06
    data_files:
      - split: train
        path: data/CC-MAIN-2015-06/*
  - config_name: CC-MAIN-2014-52
    data_files:
      - split: train
        path: data/CC-MAIN-2014-52/*
  - config_name: CC-MAIN-2014-49
    data_files:
      - split: train
        path: data/CC-MAIN-2014-49/*
  - config_name: CC-MAIN-2014-42
    data_files:
      - split: train
        path: data/CC-MAIN-2014-42/*
  - config_name: CC-MAIN-2014-41
    data_files:
      - split: train
        path: data/CC-MAIN-2014-41/*
  - config_name: CC-MAIN-2014-35
    data_files:
      - split: train
        path: data/CC-MAIN-2014-35/*
  - config_name: CC-MAIN-2014-23
    data_files:
      - split: train
        path: data/CC-MAIN-2014-23/*
  - config_name: CC-MAIN-2014-15
    data_files:
      - split: train
        path: data/CC-MAIN-2014-15/*
  - config_name: CC-MAIN-2014-10
    data_files:
      - split: train
        path: data/CC-MAIN-2014-10/*
  - config_name: CC-MAIN-2013-48
    data_files:
      - split: train
        path: data/CC-MAIN-2013-48/*
  - config_name: CC-MAIN-2013-20
    data_files:
      - split: train
        path: data/CC-MAIN-2013-20/*

📚 FineWeb-Edu-score-2

FineWeb-Edu: The finest collection of educational content the web has to offer

5.4 trillion tokens of the finest educational data the 🌐 web has to offer

What is it?

The 📚 FineWeb-Edu dataset comes in two sizes: 1.3T tokens (FineWeb-Edu) and 5.4T tokens (FineWeb-Edu-score-2) of educational web pages filtered from the 🍷 FineWeb dataset. This is the 5.4 trillion token version.

Note: this version uses a lower educational score threshold (2 instead of 3), which results in more coverage but, on average, lower-quality documents.
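Because this release keeps everything scoring 2 or higher, the stricter subset can in principle be recovered by filtering on the per-document classifier score. A minimal sketch, assuming each record exposes an integer score as an `int_score` field (as in the parent FineWeb-Edu):

```python
# Recover the stricter (score >= 3) subset from score-2 documents.
# Assumes records carry the classifier score in an "int_score" field,
# as in the parent FineWeb-Edu dataset.

def keep_high_quality(documents, threshold=3):
    """Return only documents whose educational score meets the threshold."""
    return [doc for doc in documents if doc["int_score"] >= threshold]

docs = [
    {"text": "intro to photosynthesis", "int_score": 4},
    {"text": "product landing page", "int_score": 2},
    {"text": "algebra worksheet", "int_score": 3},
]
print(keep_high_quality(docs))  # keeps the score-4 and score-3 documents
```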

To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by Llama3-70B-Instruct. We then used this classifier to retain only the most educational web pages. FineWeb-Edu outperforms FineWeb on popular benchmarks and shows the power of classifiers trained on synthetic data.

The Dataset Curation section details the process for creating the dataset.

What is being released?

Along with the dataset, which includes all filtered CommonCrawl dumps since 2013, we also release the educational classifier used for the filtering as well as the code for training it and running inference at: https://github.com/huggingface/cosmopedia/tree/main/classification.

How to load the dataset

Similarly to FineWeb, you can load either the full dataset or a specific crawl/dump. Dump names have the format CC-MAIN-(year)-(week number).
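Dump/config names can also be built programmatically from the year and ISO week number; a small helper (a sketch for illustration, not part of any library):

```python
def dump_config_name(year: int, week: int) -> str:
    """Build a Common Crawl dump name of the form CC-MAIN-(year)-(week number)."""
    return f"CC-MAIN-{year}-{week:02d}"

print(dump_config_name(2024, 10))  # CC-MAIN-2024-10
```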

Using 🏭 datatrove

from datatrove.pipeline.readers import ParquetReader

# limit determines how many documents will be streamed (remove for all)
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/fineweb-edu-score-2", glob_pattern="data/*/*.parquet", limit=1000)
# or read a single dump instead (this overrides the reader above):
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/fineweb-edu-score-2/CC-MAIN-2024-10", limit=1000)
for document in data_reader():
    # do something with document
    print(document)

###############################    
# OR for a processing pipeline:
###############################

from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import JsonlWriter

pipeline_exec = LocalPipelineExecutor(
    pipeline=[
        ParquetReader("hf://datasets/HuggingFaceFW/fineweb-edu-score-2/CC-MAIN-2024-10", limit=1000),
        LambdaFilter(lambda doc: "hugging" in doc.text),
        JsonlWriter("some-output-path")
    ],
    tasks=10
)
pipeline_exec.run()

Using datasets

from datasets import load_dataset

fw = load_dataset("HuggingFaceFW/fineweb-edu-score-2", name="CC-MAIN-2024-10", split="train", streaming=True)

Dataset curation

A new approach has recently emerged for filtering LLM training datasets: using synthetic data to develop classifiers for identifying educational content. This technique was used in the training of Llama3, Claude3 and Phi3, but its large-scale impact on web data filtering hasn't been fully explored or published.

The highly popular Phi3 models were trained on 3.3 and 4.8 trillion tokens, with the paper stating: “Our training data consists of heavily filtered publicly available web data (according to the 'educational level') from various open internet sources, as well as synthetic LLM-generated data”. Similarly, the Llama3 blog post notes: “We found that previous generations of Llama are good at identifying high-quality data, so we used Llama 2 to help build the text-quality classifiers that are powering Llama 3.” However, these classifiers and filtered datasets are not publicly available. To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by Llama3-70B-Instruct to create FineWeb-Edu.

Annotation

We used Llama3-70B-Instruct to score 500k FineWeb samples for their educational quality on a scale from 0 to 5.

We explored various prompts and found that the additive scale by Yuan et al. worked best. To avoid the LLM favoring highly technical pages like arXiv abstracts and submissions, we focused on grade-school and middle-school level knowledge. By setting a threshold of 3 (on a scale of 0 to 5) during the filtering process, we were also able to retain some high-level educational pages. The final prompt can be found in this blog post TODO.
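Under an additive prompt, the LLM awards one point per satisfied criterion, so the final 0-to-5 score is just a sum, followed by a keep/drop decision at the threshold. A toy sketch of that logic (the criterion names below are illustrative placeholders, not the wording of the actual prompt):

```python
# Toy version of additive scoring: one point per satisfied criterion,
# then a keep/drop decision at a threshold of 3. The criteria are
# illustrative placeholders, not the actual prompt's wording.

CRITERIA = [
    "provides some educational value",
    "addresses topics used in school curricula",
    "is coherent and well written",
    "suits grade-school to middle-school level",
    "is outstanding in clarity and depth",
]

def additive_score(satisfied: set) -> int:
    return sum(1 for criterion in CRITERIA if criterion in satisfied)

def keep(satisfied: set, threshold: int = 3) -> bool:
    return additive_score(satisfied) >= threshold

print(additive_score(set(CRITERIA[:4])))  # 4
```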

We also experimented with different LLMs: Llama3-70B-Instruct, Mixtral-8x7B-Instruct, and Mixtral-8x22B-Instruct. Llama3 and Mixtral-8x22B produced similar scores, while Mixtral-8x7B tended to be more generous, not fully adhering to the score scale. Verga et al. suggest using multiple LLMs as juries. We tried averaging the scores from the three models, but this shifted the distribution to the right due to the higher scores from Mixtral-8x7B. Training on a dataset filtered with a classifier using jury annotations performed worse than using a classifier based on Llama3 annotations. We hypothesize that the jury-based approach retains more low-quality samples.
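The jury effect is easy to see numerically: averaging in one systematically generous scorer pushes borderline documents over the threshold. A sketch with made-up scores (not the real annotations):

```python
from statistics import mean

# Made-up 0-5 annotations for three documents from three "jury" models,
# where the third scorer is systematically more generous.
scorer_a  = [2, 3, 4]
scorer_b  = [3, 3, 4]
generous  = [4, 4, 5]

jury = [mean(scores) for scores in zip(scorer_a, scorer_b, generous)]

threshold = 3
kept_single = sum(s >= threshold for s in scorer_a)
kept_jury = sum(s >= threshold for s in jury)
print(kept_single, kept_jury)  # 2 3 -- the jury keeps one extra borderline document
```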

Classifier training

We fine-tuned a BERT-like regression model using these annotations, based on Snowflake-arctic-embed. When converted to a binary classifier, using a score of 3 as the threshold for keeping or removing documents, the model achieved an F1 score of 82%. Classifying FineWeb's 15T tokens took 6k H100 GPU hours.
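Binarizing the regression output is a simple threshold comparison, and the reported F1 is computed the usual way over the resulting keep/drop labels. A minimal sketch with toy predictions (not the real evaluation data):

```python
def f1_at_threshold(pred_scores, true_labels, threshold=3):
    """F1 of the positive (keep) class after binarizing regression scores."""
    preds = [s >= threshold for s in pred_scores]
    tp = sum(p and t for p, t in zip(preds, true_labels))
    fp = sum(p and not t for p, t in zip(preds, true_labels))
    fn = sum(not p and t for p, t in zip(preds, true_labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

scores = [3.6, 2.1, 4.2, 2.9, 3.1]   # toy regression outputs
labels = [True, False, True, True, False]  # toy "educational" ground truth
print(f1_at_threshold(scores, labels))  # ~0.67
```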

The classifier is available at: https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/

Filtering

We filtered out samples with scores lower than 3. This removed 92% of the dataset, leaving us with 1.2T educational tokens. Our ablations demonstrated that this refined dataset significantly outperforms the original FineWeb dumps, including the best-performing dump (CC-MAIN-2024-10). To retain more tokens, we also experimented with a less strict threshold of 2 instead of 3. This approach preserved 4.5T tokens and still outperformed the unfiltered dataset. TODO: add ablation results
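The retention figure follows directly from the removal rate: removing 92% of roughly 15T tokens leaves about 8%, i.e. on the order of 1.2T. A quick check:

```python
total_tokens = 15e12       # approximate size of FineWeb in tokens
removed_fraction = 0.92    # share filtered out at threshold 3

retained = total_tokens * (1 - removed_fraction)
print(f"{retained / 1e12:.1f}T tokens retained")  # 1.2T tokens retained
```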

We release these two datasets as FineWeb-Edu and FineWeb-Edu-score-2 along with the classifier.

Dataset performance evaluation and ablations

We conducted our dataset performance ablations and evaluations by training 1.8B-parameter models on 28B tokens, and then on 350B tokens to validate the results.

The detailed configurations for training the models can be found here (TODO).

FineWeb-Edu outperforms FineWeb and other web datasets on all popular benchmarks.

TODO: add barplots & agg_score curves

You will find these models on this collection. The FineWeb-Edu ablation model (trained on 350B tokens) is available at https://huggingface.co/HuggingFaceFW/ablation-model-fineweb-edu.

Considerations for Using the Data

This section is copied from the parent dataset: FineWeb.

Social Impact of Dataset

With the release of this dataset we aim to make model training more accessible to the machine learning community at large.

While multiple open-weights models with strong performance have been publicly released in the past, these releases are more often than not unaccompanied by the corresponding training dataset. This is unfortunate, as dataset characteristics have been demonstrated to have a very large impact on model performance. Since the creation of a high-quality training dataset is a fundamental requirement for training an LLM capable of excelling at downstream tasks, with 🍷 FineWeb we (a) make the dataset creation process more transparent, by sharing our entire processing setup including the codebase used, and (b) help alleviate the costs of dataset curation, in both time and compute, for model creators by publicly releasing our dataset to the community.

Discussion of Biases

Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing filtering at the URL level. However, there are still a significant number of documents present in the final dataset that could be considered toxic or that contain harmful content. As 🍷 FineWeb was sourced from the web as a whole, any harmful biases typically present in it may be reproduced in our dataset.

We deliberately avoided using machine learning filtering methods that define text quality based on similarity to a “gold” source such as Wikipedia, or toxicity classifiers, as these methods have been known to disproportionately remove content in specific dialects and to over-classify as toxic text related to specific social identities, respectively.

Other Known Limitations

As a consequence of some of the filtering steps applied, it is likely that code content is not prevalent in our dataset. If you are training a model that should also perform code tasks, we recommend combining 🍷 FineWeb with a code dataset, such as The Stack v2. You should also probably consider complementing 🍷 FineWeb with specialized curated sources (such as Wikipedia, for example), as they will likely have better formatting than the Wikipedia content included in 🍷 FineWeb (we did not tailor the processing to individual websites).

Additional Information

Licensing Information

The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 license. The use of this dataset is also subject to CommonCrawl's Terms of Use.

Future work

We plan to work on a better educational classifier to improve the quality of FineWeb-Edu.

Citation Information

@software{lozhkov2024fineweb-edu,
  author = {Lozhkov, Anton and Ben Allal, Loubna and von Werra, Leandro and Wolf, Thomas},
  title = {FineWeb-Edu},
  month = may,
  year = 2024,
  url = {https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu}
}