Each data source was filtered individually with respect to the underlying data.
  ## Global Deduplication
After the web and curated sources were filtered, all sources were globally deduplicated to create TxT360. The tips and tricks behind the deduplication process are detailed below.
 
## Dataset Structure
The dataset is organized under the `data` directory, with each subdirectory representing a data subset.
Below is an overview of the structure and organization of these subsets:
```
├── data
    ├── common-crawl                    # data subset
        ├── CC-MAIN-2013-20             # common-crawl dumps
            ├── 1-1                     # number of duplicates
                ├── chunk_000_0000.jsonl.gz
                ├── ...
            ├── 2-5
                ├── chunk_000_0000.jsonl.gz
                ├── ...
            ├── ...
        ├── CC-MAIN-2013-48
            ├── 1-1
                ├── chunk_000_0000.jsonl.gz
                ├── ...
            ├── ...
        ├── ...
    ├── dm_math
        ├── full_data_1
            ├── 0_11255.jsonl
            ├── ...
        ├── full_data_2
            ├── 10000_11255.jsonl
            ├── ...
    ├── arxiv
        ├── 1-1                         # number of duplicates
            ├── 0_171.jsonl
            ├── ...
        ├── 2-5
            ├── 0_2.jsonl
            ├── ...
        ├── ...
    ├── europarl
        ├── 1-1                         # number of duplicates
            ├── 0_6.jsonl
            ├── ...
        ├── 2-5
            ├── 0_0.jsonl
            ├── ...
        ├── ...
    ├── ...
```

### Common Crawl (common-crawl)
Each subdirectory under `common-crawl` corresponds to a specific dump of the dataset.
Inside each dump folder, the data is further segmented into buckets based on the number of duplicates identified during deduplication:

- `1-1`: contains documents with no duplicates across the dataset.
- `2-5`, `6-10`, `11-100`, `101-1000`, `1001-30000000`: each contains documents that fall within the respective range of duplicate counts.

Example path: `data/common-crawl/CC-MAIN-2013-20/1-1/chunk_000_0000.jsonl.gz`
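
Each chunk is a gzipped JSON Lines file, so it can be streamed record by record with the standard library alone. A minimal sketch, assuming the chunk above has been downloaded to a matching local path (the path is illustrative):

```python
import gzip
import json

# Illustrative local path mirroring the repository layout.
path = "data/common-crawl/CC-MAIN-2013-20/1-1/chunk_000_0000.jsonl.gz"

# Each line is one JSON document (see the Data Schema section below).
with gzip.open(path, "rt", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        print(doc["meta"]["url"])  # e.g. inspect the source URL
        break  # stop after the first record
```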

### DM Math (dm_math)
The `dm_math` subset is divided into two subfolders, `full_data_1` and `full_data_2`, to comply with the limit of 10,000 files per folder in a Hugging Face repository.

Example path: `data/dm_math/full_data_1/0_11255.jsonl`

### Others
Similar to `common-crawl`, the other curated data subsets, such as `arxiv`, `europarl`, etc., are organized by the number of duplicates:
- `1-1`, `2-5`, `6-10`, `11-100`, `101-1000`, `1001-inf`

Note that some data subsets might not include the folder `1001-inf` (`1001-30000000` in `common-crawl`), or might contain only a few documents in it, because documents duplicated more than 1,000 times are rare.
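
Since every bucket is stored as JSON Lines, a single subset, or even a single duplicate bucket, can be loaded with the generic `json` loader from the `datasets` library. A minimal sketch, assuming a local copy of the repository's `data` directory (the glob pattern and split name are illustrative):

```python
from datasets import load_dataset

# Load one duplicate bucket of the arxiv subset from a local copy
# of the repository; the glob pattern is illustrative.
ds = load_dataset(
    "json",
    data_files="data/arxiv/1-1/*.jsonl",
    split="train",
)
print(len(ds), ds[0].keys())
```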

## Data Schema

### Common Crawl (common-crawl)
The documents in `common-crawl` follow the schema:
```python
{'text': '...',  # text of the document
 'meta':
  {
   'lang': 'en',  # top language detected by the fastText model
   'lang_score': 0.912118136882782,  # confidence score for the detected language
   'url': 'http://www.shopgirljen.com/2017/10/lg-celebrates-5-years-of-lg-oled-tv.html',  # the URL the raw webpage was scraped from
   'timestamp': '2024-07-24T00:56:12Z',  # timestamp from the Common Crawl raw data
   'cc-path': 'crawl-data/CC-MAIN-2024-30/segments/1720763518130.6/warc/CC-MAIN-20240723224601-20240724014601-00300.warc.gz',  # the path of the document in the raw Common Crawl data
   'quality_signals':
    {
     'url_score': 0.0,
     'fraction_of_duplicate_lines': 0.0,
     'fraction_of_characters_in_duplicate_lines': 0.0,
     'fraction_of_duplicate_paragraphs': 0.0,
     'fraction_of_characters_in_duplicate_paragraphs': 0.0,
     'fraction_of_characters_in_most_common_ngram': [[2, 0.03626373626373627],
                                                     [3, 0.03296703296703297],
                                                     [4, 0.01868131868131868]],
     'fraction_of_characters_in_duplicate_ngrams': [[5, 0.01868131868131868],
                                                    [6, 0.01868131868131868],
                                                    [7, 0.01868131868131868],
                                                    [8, 0.0],
                                                    [9, 0.0],
                                                    [10, 0.0]],
     'fraction_of_words_corrected_in_lines': 0.0,
     'fraction_of_lines_ending_with_ellipsis': 0.0,
     'fraction_of_lines_starting_with_bullet_point': 0.0,
     'fraction_of_lines_with_toxic_words': 0.0,
     'num_of_lines_with_toxic_words': 0,
     'num_of_toxic_words': 0,
     'word_count': 358,
     'mean_word_length': 5.083798882681564,
     'num_of_sentences': 19,
     'symbol_to_word_ratio': 0.0,
     'fraction_of_words_with_alpha_character': 1.0,
     'num_of_stop_words': 82,
     'num_of_paragraphs': 0,
     'has_curly_bracket': False,
     'has_lorem_ipsum': False,
     'orig_text_has_dup_lines': False
    },
   'dup_signals':
    {
     'dup_doc_count': 166,  # the number of duplicated documents
     'dup_dump_count': 57,  # the number of dumps the duplicated documents come from
     'dup_details':  # the dump distribution of the duplicated documents
      {
       '2024-30': 2,
       '2024-26': 1,
       '2024-22': 1,
       ...
      }
    }
  },
 'subset': 'commoncrawl'}
```
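
Because each record is a plain nested dict, the quality signals can be used directly for post-hoc filtering. A minimal sketch (the field names come from the schema above; the thresholds are arbitrary illustrations, not recommended values):

```python
def passes_quality_filter(doc, min_words=50, min_lang_score=0.8):
    """Illustrative filter over fields from the schema above;
    thresholds are arbitrary examples."""
    meta = doc["meta"]
    qs = meta["quality_signals"]
    return (
        meta["lang_score"] >= min_lang_score
        and qs["word_count"] >= min_words
        and not qs["has_lorem_ipsum"]
    )
```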

Please note that documents without duplicates, located in the `*/1-1/` folders, have an empty `dup_signals` field.
Additionally, some documents with duplicates might include an `unknown` entry within `dup_details`.
One example could be:
```python
{'text': '...',  # text of the document
 'meta':
  {
   ...
   'dup_signals':
    {
     'dup_doc_count': 7,
     'dup_dump_count': 3,
     'dup_details':
      {
       'unknown': 4,
       '2024-30': 1,
       '2024-26': 1,
       '2024-22': 1,
      }
    }
  },
 'subset': 'commoncrawl'}
```
This occurs because the distribution of duplicates across dumps was not recorded in the early stages of our deduplication process; only the total count of duplicate documents (`dup_doc_count`) was maintained.
Due to the high cost of rerunning the deduplication, we opted to label these distributions as `unknown` when integrating them with other documents for which the duplicate distribution is available.
In these cases, `dup_dump_count` is calculated excluding the `unknown` entry.
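
To make that convention concrete, here is one way to recover the dump count from `dup_details`; a minimal sketch, assuming `doc` follows the schema shown above:

```python
def dump_count_excluding_unknown(doc):
    """Count the dumps listed in dup_details, skipping the 'unknown'
    bucket, mirroring how dup_dump_count is reported."""
    dup_signals = doc["meta"].get("dup_signals") or {}
    dup_details = dup_signals.get("dup_details", {})
    return sum(1 for dump in dup_details if dump != "unknown")

# For the example above: dup_details has 'unknown' plus three dated
# dumps, so the function returns 3, matching dup_dump_count.
```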

# Citation

**BibTeX:**