Fix typos

README.md

An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 [series](https://huggingface.co/Zyphra/Zamba2-7B) [of](https://huggingface.co/Zyphra/Zamba2-2.7B) [models](https://huggingface.co/Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.

According to our evaluations, Zyda-2 is the most performant per-token open dataset available. Zyda-2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
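
Such a mix can be assembled with `datasets.interleave_datasets`. The sketch below is illustrative only: the 90/10 ratio, the use of the `bigcode/starcoderdata` corpus (the Starcoder training data), and its `content` column are assumptions here, not an official recipe.

```python
# Illustrative sketch: stream one Zyda-2 component alongside a code corpus
# and interleave them. Ratio and choice of code dataset are assumptions.
import datasets

zyda = datasets.load_dataset(
    "Zyphra/Zyda-2", name="dclm_crossdeduped", split="train", streaming=True
).select_columns(["text"])

code = datasets.load_dataset(
    "bigcode/starcoderdata", data_dir="python", split="train", streaming=True
).rename_column("content", "text").select_columns(["text"])

# Draw roughly 90% natural language and 10% code.
mixed = datasets.interleave_datasets([zyda, code], probabilities=[0.9, 0.1], seed=42)
```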

For more information, please see our [technical blog](https://www.zyphra.com/post/building-zyda-2).

## How to download

Since we preserved the schemas of the original component datasets, attempting to download the whole dataset with a single `datasets.load_dataset()` call might fail at the split-generation stage.

To download the whole dataset, we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading the individual components separately.

Example command to clone the repository using `huggingface-cli`: `huggingface-cli download Zyphra/Zyda-2 --repo-type dataset`
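
The same download can be scripted from Python with `huggingface_hub` (a small sketch; the `local_dir` target is just an example):

```python
# Mirror the CLI command above: fetch the full dataset repository.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Zyphra/Zyda-2",
    repo_type="dataset",
    local_dir="zyda2",  # example destination directory
)
```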

Commands to download individual components:
- DCLM: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train")`
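
If you do use `datasets.load_dataset()`, a simple loop over the component configs materializes each one separately. Only the DCLM name is filled in below; the remaining component names should be taken from the repository:

```python
# Sketch: download each Zyda-2 component on its own, since a single
# load_dataset() call over the whole repo may fail at split generation.
import datasets

components = ["dclm_crossdeduped"]  # extend with the other component names

for name in components:
    ds = datasets.load_dataset("Zyphra/Zyda-2", name=name, split="train")
    ds.save_to_disk(f"zyda2-{name}")  # reload later with datasets.load_from_disk
```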

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each component has its own individual schema. Please consult the respective sources for exact information.

However, in all components the document text is in the `text` column, and the unique document ID is in the `nemo_id` column.

Our Zyda-1 and Dolma-CC versions also have two additional columns corresponding to the predictions of Nvidia's quality model ([nvidia/quality-classifier-deberta](https://huggingface.co/nvidia/quality-classifier-deberta)): `quality_prob` and `quality_pred`.
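
These columns make it straightforward to subset a component by predicted quality. A minimal sketch, assuming the Zyda-1 component config is named `zyda_crossdeduped-filtered` and that `quality_pred` holds string labels such as "High" (both assumptions worth verifying against the repository and the classifier card):

```python
# Sketch: inspect the shared columns, then keep only documents the
# classifier rated "High". Config name and label value are assumptions.
import datasets

ds = datasets.load_dataset(
    "Zyphra/Zyda-2", name="zyda_crossdeduped-filtered", split="train"
)

row = ds[0]
print(row["nemo_id"], row["quality_pred"], row["quality_prob"])
print(row["text"][:200])

high_quality = ds.filter(lambda r: r["quality_pred"] == "High")
```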

#### Personal and Sensitive Information

As a language modeling dataset, Zyda-2 likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.

## Bias, Risks, and Limitations

If you use our dataset to train a model, please cite us at:

day = {15}
}
```