---
dataset_info:
  config_name: default
  splits:
  - name: train
    num_examples: 1594197267
license: odc-by
pretty_name: Zyda
task_categories:
- text-generation
language:
- en
size_categories:
- n>1T
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*/*
- config_name: zyda_no_starcoder
  data_files:
  - split: train
    path: data/zyda_no_starcoder/*/*
- config_name: zyda_arxiv_only
  data_files:
  - split: train
    path: data/zyda_no_starcoder/zyda_arxiv/*
- config_name: zyda_c4-en_only
  data_files:
  - split: train
    path: data/zyda_no_starcoder/c4_en/*
- config_name: zyda_peS2o_only
  data_files:
  - split: train
    path: data/zyda_no_starcoder/zyda_peS2o/*
- config_name: zyda_pile-uncopyrighted_only
  data_files:
  - split: train
    path: data/zyda_no_starcoder/zyda_pile-uncopyrighted/*
- config_name: zyda_refinedweb_only
  data_files:
  - split: train
    path: data/zyda_no_starcoder/zyda_refinedweb/*
- config_name: zyda_slimpajama_only
  data_files:
  - split: train
    path: data/zyda_no_starcoder/zyda_slimpajama/*
- config_name: zyda_starcoder_only
  data_files:
  - split: train
    path: data/zyda_starcoder/*/*
---

# Zyda

Zyda is a 1.3T-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and, thanks to our meticulous post-processing pipeline, is at least comparable to, and potentially better than, the best openly available datasets. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T-token scale, or in combination with Fineweb or Dolma for multi-trillion-token training.

An early version of Zyda was used as the primary dataset for phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a pretraining dataset. Models trained on Zyda significantly outperform identical models from the Pythia suite trained on 300B tokens of the [Pile](https://arxiv.org/abs/2101.00027). Zyda also outperforms Dolma, RefinedWeb, and Fineweb in comparisons of 1.4B models trained on 50B tokens of each dataset. According to our evaluations, the non-StarCoder variant of Zyda is the most performant-per-token open dataset available on language tasks, and the StarCoder variant ties with Fineweb.
*Zyda performance across training steps.*
These results are aggregate scores on classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) over the course of training for 1.4B models trained on 50B tokens of each dataset.

## How to download

Full dataset:
```
import datasets
ds = datasets.load_dataset("Zyphra/Zyda", split="train")
```

Full dataset without StarCoder:
```
import datasets
ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")
```

To download an individual component, pass its name to the `name` argument of `load_dataset()`:
- zyda_arxiv_only
- zyda_c4-en_only
- zyda_peS2o_only
- zyda_pile-uncopyrighted_only
- zyda_refinedweb_only
- zyda_slimpajama_only
- zyda_starcoder_only

## Breakdown by component

| Component | Download size (parquet, GB) | Documents (millions) | gpt-neox tokens (billions) |
| --- | --- | --- | --- |
| zyda_refinedweb_only | 1,712.4 | 920.5 | 564.8 |
| zyda_c4-en_only | 366.7 | 254.5 | 117.5 |
| zyda_slimpajama_only | 594.7 | 142.3 | 242.3 |
| zyda_pile-uncopyrighted_only | 189.4 | 64.9 | 82.9 |
| zyda_peS2o_only | 133.7 | 35.7 | 53.4 |
| zyda_arxiv_only | 8.3 | 0.3 | 4.7 |
| zyda_starcoder_only | 299.5 | 176.1 | 231.3 |
| Total | 3,304.7 | 1,594.2 | 1,296.7 |

### Dataset Description

- **Curated by:** Zyphra
- **Language(s) (NLP):** Primarily English
- **License:** Open Data Commons Attribution License (ODC-BY)

## Dataset Structure

Dataset fields:
- `text`: the actual document text used for training
- `source`: the component dataset the text comes from
- `filtering_features`: precomputed values of the features used for filtering (converted to a JSON string)
- `source_other`: metadata from the source dataset (converted to a JSON string)

A short sketch of reading these fields appears after the source list below.

### Source Data

Zyda was drawn from seven component open datasets that are well regarded in the community:

- Pile Uncopyrighted: https://huggingface.co/datasets/monology/pile-uncopyrighted
- C4-en: https://huggingface.co/datasets/allenai/c4
- peS2o: https://huggingface.co/datasets/allenai/peS2o
- RefinedWeb: https://huggingface.co/datasets/tiiuae/falcon-refinedweb
- SlimPajama: https://huggingface.co/datasets/cerebras/SlimPajama-627B
- arxiv_s2orc_parsed: https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
- StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
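To tie the download instructions and the field descriptions above together, here is a minimal sketch of streaming a single component and decoding its JSON-string metadata columns. The choice of `zyda_arxiv_only`, the use of `streaming=True`, and the `json.loads` calls are illustrative assumptions about typical usage, not part of the official card; the exact keys inside `filtering_features` and `source_other` vary by component.

```
import json

import datasets

# Stream a single (small) component so nothing needs to be fully downloaded up front.
ds = datasets.load_dataset(
    "Zyphra/Zyda",
    name="zyda_arxiv_only",  # any of the *_only configs listed above works here
    split="train",
    streaming=True,
)

# Look at the first document and decode the JSON-string metadata fields.
doc = next(iter(ds))
print(doc["source"])                                # component the text came from
filtering = json.loads(doc["filtering_features"])   # precomputed filtering features
source_meta = json.loads(doc["source_other"])       # metadata from the source dataset
print(sorted(filtering), sorted(source_meta))       # inspect which keys are present
```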
*Composition of Zyda.*
#### Data Collection and Processing

Zyda was created using a two-stage post-processing pipeline consisting of *filtering* and *deduplication*.

For the filtering stage, we utilized a set of hand-crafted and tuned filters derived from a number of sources, such as C4, RedPajama, and Gopher, in addition to our own filters.

For the deduplication stage, we used minhash approximate deduplication. We deduplicated on 13-grams, used a minhash signature size of 128, and filtered out documents with a Jaccard similarity above 0.4. (A toy illustration of this procedure appears at the end of this card.)

For full details on our data processing, see the [Zyda technical report](https://arxiv.org/abs/2406.01981) and our [dataset processing code](https://github.com/Zyphra/Zyda_processing).

#### Personal and Sensitive Information

As a language modeling dataset, Zyda likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.

## Bias, Risks, and Limitations

As a dataset composed of open web scrapes, Zyda likely contains biased and toxic content.

## Licensing Information

We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.

## Citation

If you use our dataset to train a model, please cite us:

```
@misc{tokpanov2024zyda,
      title={Zyda: A 1.3T Dataset for Open Language Modeling},
      author={Yury Tokpanov and Beren Millidge and Paolo Glorioso and Jonathan Pilault and Adam Ibrahim and James Whittington and Quentin Anthony},
      year={2024},
      eprint={2406.01981},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
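For readers who want a concrete feel for the deduplication step described under Data Collection and Processing, the following is a minimal, self-contained sketch of minhash near-deduplication with the same parameters (13-gram shingles, 128-permutation signatures, Jaccard threshold of 0.4). It relies on the third-party `datasketch` library and a toy word-level shingling scheme; it illustrates the general technique only and is not Zyphra's actual pipeline, which lives in the processing repository linked above.

```
from datasketch import MinHash, MinHashLSH  # third-party: pip install datasketch


def minhash_signature(text, num_perm=128, ngram=13):
    """Build a minhash signature over word-level 13-grams of a document."""
    words = text.split()
    sig = MinHash(num_perm=num_perm)
    for i in range(max(len(words) - ngram + 1, 1)):
        shingle = " ".join(words[i : i + ngram])
        sig.update(shingle.encode("utf-8"))
    return sig


# LSH index that retrieves previously kept documents whose estimated
# Jaccard similarity with the query exceeds 0.4.
lsh = MinHashLSH(threshold=0.4, num_perm=128)

# Two toy documents: doc_b is a near-duplicate of doc_a.
docs = {
    "doc_a": "zyda combines seven open datasets with uniform filtering and deduplication " * 3,
    "doc_b": "zyda combines seven open datasets with uniform filtering and deduplication " * 3 + "plus a tiny edit",
}

kept = []
for key, text in docs.items():
    sig = minhash_signature(text)
    if lsh.query(sig):      # an already-kept document is too similar: drop this one
        continue
    lsh.insert(key, sig)
    kept.append(key)

print(kept)  # typically ['doc_a']: doc_b is dropped as a near-duplicate
```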