BerenMillidge committed on
Commit c019136
1 Parent(s): a9a7363

Update README.md

Files changed (1)
  1. README.md +10 -2
README.md CHANGED
@@ -63,7 +63,15 @@ configs:
 
 <!-- Provide a quick summary of the dataset. -->
 
-Zyda is a 1.3T language modelling dataset created by collecting open and high quality datasets and combining them and performing a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and is at least comparable and potentially better to the best openly available datasets available, due to our meticulous post-processing pipeline. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T scale, or in combination with Fineweb or Dolma for multi-trillion token training.
+Zyda is a 1.3T language modelling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and, thanks to our meticulous post-processing pipeline, is at least comparable to, and potentially better than, the best openly available datasets. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T scale, or in combination with Fineweb or Dolma for multi-trillion-token training.
+
+Zyda is the primary dataset used in phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model that performs strongly on a per-token basis, testifying to the strength of Zyda as a dataset.
+
+Models trained on Zyda significantly outperform parameter-matched models from the Pythia suite, trained on the Pile, across 300B tokens.
+
+Zyda also outperforms Dolma, RefinedWeb, and Fineweb in comparisons of 1.4B models trained on 50B tokens of each dataset.
+
+According to our evaluations, Zyda is the most performant per-token open dataset available.
 
 
 ## How to download
@@ -126,7 +134,7 @@ StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
 
 <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
 
-[More Information Needed]
+
 
 
 
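Since the new paragraphs sit directly above the README's "How to download" section, a minimal sketch of pulling a few documents with the Hugging Face `datasets` library is shown below. This is an illustration only, not part of this commit: the repo id `Zyphra/Zyda`, the default config, the `train` split, and the `text` field are assumptions to be checked against the dataset card.

```python
# Minimal sketch: stream a few Zyda documents with the `datasets` library.
# Assumptions (verify on the dataset card): repo id "Zyphra/Zyda", a default
# config, a "train" split, and a "text" column holding the raw document text.
from datasets import load_dataset

ds = load_dataset("Zyphra/Zyda", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example["text"][:200])  # preview the first 200 characters of each document
    if i >= 2:                    # stop after three examples
        break
```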