yury-zyphra committed
Commit 833bdcb
1 Parent(s): 0e1780e

Update README.md

Files changed (1)
  1. README.md +54 -2
README.md CHANGED
@@ -1,4 +1,18 @@
 ---
 license: odc-by
 pretty_name: Zyda
 task_categories:
@@ -53,13 +67,30 @@ This dataset card aims to be a base template for new datasets. It has been gener
 
 ## Dataset Details
 
 
 
- ### Dataset Description
 
- <!-- Provide a longer summary of what this dataset is. -->
 
 
 
 - **Curated by:** Zyphra
 - **Funded by [optional]:** [More Information Needed]
@@ -95,6 +126,13 @@ This dataset card aims to be a base template for new datasets. It has been gener
 
 <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
 
 [More Information Needed]
 
 ## Dataset Creation
@@ -109,6 +147,20 @@ This dataset card aims to be a base template for new datasets. It has been gener
 
 <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
 
 #### Data Collection and Processing
 
 <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
 
 ---
+ dataset_info:
+   features:
+   - name: text
+     dtype: string
+   - name: source
+     dtype: string
+   - name: filtering_features
+     dtype: string
+   - name: source_other
+     dtype: string
+   splits:
+   - name: train
+     num_examples: 1594197267
+   download_size: 3.3TB
 license: odc-by
 pretty_name: Zyda
 task_categories:
 
 
 ## Dataset Details
 
+ This dataset was created by filtering and deduplicating openly available datasets.
 
 
+ ## How to download
 
+ Full dataset:
+ `datasets.load_dataset("Zyphra/Zyda", split="train")`
+
+ Full dataset without StarCoder:
+ `datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")`
+
+ To download an individual component, pass its name as the `name` argument of `load_dataset()`:
+ - zyda_arxiv_only
+ - zyda_c4-en_only
+ - zyda_peS2o_only
+ - zyda_pile-uncopyrighted_only
+ - zyda_refinedweb_only
+ - zyda_slimpajama_only
+ - zyda_starcoder_only
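The component configs above follow a common `zyda_<component>_only` naming pattern, which can be wrapped in a small helper. This is a sketch, not part of the `datasets` library: the helper names are ours, and the `datasets` import is deferred so the config-name logic runs without the library installed. Streaming mode avoids materializing the full multi-terabyte split.

```python
# Sketch: map a Zyda component short name to its load_dataset() config name.
# COMPONENTS, zyda_config_name, and load_component are illustrative helpers.

COMPONENTS = [
    "arxiv", "c4-en", "peS2o", "pile-uncopyrighted",
    "refinedweb", "slimpajama", "starcoder",
]


def zyda_config_name(component: str) -> str:
    """Return the config name for a single component, e.g. "zyda_arxiv_only"."""
    if component not in COMPONENTS:
        raise ValueError(f"unknown component: {component!r}")
    return f"zyda_{component}_only"


def load_component(component: str, streaming: bool = True):
    """Load one Zyda component; streaming=True iterates without a full download."""
    from datasets import load_dataset  # requires `pip install datasets`

    return load_dataset(
        "Zyphra/Zyda",
        name=zyda_config_name(component),
        split="train",
        streaming=streaming,
    )
```

For example, `load_component("c4-en")` resolves to the `zyda_c4-en_only` config listed above.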
 
 
+ ### Dataset Description
+
+ <!-- Provide a longer summary of what this dataset is. -->
 
 - **Curated by:** Zyphra
 - **Funded by [optional]:** [More Information Needed]
 
 
 <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
 
+ Dataset fields:
+ - `text`: the actual document text used for training
+ - `source`: the component dataset the text comes from
+ - `filtering_features`: precomputed values of the features that were used for filtering, serialized as a JSON string
+ - `source_other`: metadata from the source dataset, serialized as a JSON string
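Since `filtering_features` and `source_other` are stored as JSON strings, they need decoding before use. A minimal sketch, assuming the field layout described above; the sample record and its key/value contents are invented for illustration:

```python
import json


def decode_record(record: dict) -> dict:
    """Parse the JSON-encoded string fields of a Zyda record into Python objects."""
    out = dict(record)
    for key in ("filtering_features", "source_other"):
        if out.get(key):  # leave missing/empty fields untouched
            out[key] = json.loads(out[key])
    return out


# Hypothetical record shaped like the fields described above.
sample = {
    "text": "Hello world",
    "source": "zyda_c4-en",
    "filtering_features": '{"word_count": 2}',
    "source_other": '{"split": "train"}',
}
decoded = decode_record(sample)
```

After decoding, `decoded["filtering_features"]` is a plain `dict` instead of a string.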
+
+
 [More Information Needed]
 
 ## Dataset Creation
 
 
 <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
 
+ Pile Uncopyrighted: https://huggingface.co/datasets/monology/pile-uncopyrighted
+
+ C4-en: https://huggingface.co/datasets/allenai/c4
+
+ peS2o: https://huggingface.co/datasets/allenai/peS2o
+
+ RefinedWeb: https://huggingface.co/datasets/tiiuae/falcon-refinedweb
+
+ SlimPajama: https://huggingface.co/datasets/cerebras/SlimPajama-627B
+
+ arxiv_s2orc_parsed: https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
+
+ StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
+
 #### Data Collection and Processing
 
 <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->