---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: SuperWikiNEXT-32B
paperswithcode_id: null
license:
- cc-by-sa-3.0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
language:
- af
- ar
- ast
- az
- be
- bg
- bn
- ca
- ce
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ka
- kk
- ko
- la
- lt
- lv
- mk
- ms
- my
- nl
- nn
- 'no'
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- ta
- tg
- th
- tr
- uk
- ur
- uz
- vi
- zh
size_categories:
- 10B<n<100B
---

# Dataset Card for SuperWikiNEXT-32B

![](Waifu.png "Based off from Wikipe-tan (Maid, cyan hair, short hair) and Wikipedia's globe logo.")

*Waifu to catch your attention.*

## Dataset Details

### Dataset Description

*SuperWikipedia-NEXT* is an enhanced version of the SuperWIKI dataset. SuperWIKI grew out of the idea of a better-filtered Wikipedia that still retains its markdown formatting.
*SuperWikipedia-NEXT* contains **~32.44B** tokens (llama-2-7b-chat tokenizer) / **~27.92B** tokens (RWKV tokenizer) from approximately **60** "high quality" / "selected" languages.

- **Curated by:** KaraKaraWitch
- **Funded by:** Recursal.ai (I work there lol)
- **Shared by:** KaraKaraWitch
- **Language(s) (NLP):** Many. Refer to the data below for a list of languages.
- **License:** cc-by-sa-4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Source Data:** [https://dumps.wikimedia.org/other/enterprise_html/](https://dumps.wikimedia.org/other/enterprise_html/)

### Dataset Summary

A Wikipedia dataset containing cleaned articles in the selected languages.
The dataset is built manually from Wikipedia HTML dumps, with one split per language.
Each example contains the content of one full Wikipedia article.

### Supported Tasks and Leaderboards

The dataset is generally used for Language Modelling.

### Languages

We have selected the following Wikipedias:

```
af.wikipedia.org
ar.wikipedia.org
ast.wikipedia.org
az.wikipedia.org
be.wikipedia.org
bg.wikipedia.org
bn.wikipedia.org
ca.wikipedia.org
ce.wikipedia.org
cs.wikipedia.org
cy.wikipedia.org
da.wikipedia.org
de.wikipedia.org
el.wikipedia.org
en.wikipedia.org
eo.wikipedia.org
es.wikipedia.org
et.wikipedia.org
eu.wikipedia.org
fa.wikipedia.org
fi.wikipedia.org
fr.wikipedia.org
gl.wikipedia.org
he.wikipedia.org
hi.wikipedia.org
hr.wikipedia.org
hu.wikipedia.org
hy.wikipedia.org
id.wikipedia.org
it.wikipedia.org
ja.wikipedia.org
ka.wikipedia.org
kk.wikipedia.org
ko.wikipedia.org
la.wikipedia.org
lt.wikipedia.org
lv.wikipedia.org
min.wikipedia.org
mk.wikipedia.org
ms.wikipedia.org
my.wikipedia.org
nl.wikipedia.org
nn.wikipedia.org
no.wikipedia.org
pl.wikipedia.org
pt.wikipedia.org
ro.wikipedia.org
ru.wikipedia.org
sh.wikipedia.org
simple.wikipedia.org
sk.wikipedia.org
sl.wikipedia.org
sr.wikipedia.org
sv.wikipedia.org
ta.wikipedia.org
tg.wikipedia.org
th.wikipedia.org
tr.wikipedia.org
uk.wikipedia.org
ur.wikipedia.org
uz.wikipedia.org
vi.wikipedia.org
zh-min-nan.wikipedia.org
zh.wikipedia.org
zh-yue.wikipedia.org
```

The `.wikipedia.org` suffix has been added to each entry for your convenience.

### Selection of Wikipedia

We deem a particular Wikipedia language as high quality if:

1. It has a total article count of `>100,000`.
2. It has a `Depth > 5.1`.

*Depth is calculated using the following equation:*

`depth = (article_edits / total_pages) * ((total_pages - articles) / articles) ** 2`

This formula is taken directly from the [Wikipedia article depth](https://meta.wikimedia.org/wiki/Wikipedia_article_depth) page on Meta-Wiki.
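
For reference, here is a minimal Python sketch of the selection check, using the formula exactly as stated above (the function and variable names are illustrative and not taken from the dataset scripts):

```python
# A minimal sketch of the selection criteria. Variable names mirror the depth
# formula above; no real wiki statistics are assumed here.
def wiki_depth(article_edits: int, total_pages: int, articles: int) -> float:
    """Depth score used to judge whether a Wikipedia is 'high quality'."""
    return (article_edits / total_pages) * ((total_pages - articles) / articles) ** 2


def is_selected(article_edits: int, total_pages: int, articles: int) -> bool:
    """A wiki is selected if it has more than 100,000 articles and Depth > 5.1."""
    return articles > 100_000 and wiki_depth(article_edits, total_pages, articles) > 5.1
```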

### Filtering

Extensive HTML and Markdown filtering has been done to derive the final dataset; short illustrative sketches of a few of these steps are shown after each list below.

For HTML:

1. Parse the article content with BeautifulSoup.
2. We first extract out titles from the Soup. 
3. Drop (i.e. skip processing) *stub articles*. To ensure multilingual coverage, we use a list of stub template names found across multiple languages via Wikidata. (The template names are included in `wikipedia_template.py`.)
4. Drop articles created by the *Lsjbot* bot.
5. Collapse styles with `data-mw` component into its next sibling.
6. Remove raw `href` links (links whose visible text is identical to the `href` URL).
7. Remove "citation needed" templates.
8. Remove citation templates.
9. Remove redirect templates.
10. Drop articles where the article content consists of 50% or more of tables and lists.
11. Remove message boxes (the orange alert boxes at the top of articles).
12. Remove infoboxes (the boxes on the right-hand side of articles).
13. Selectively remove tables that consist mostly of empty space (more `<td>` elements than the text length, with a text length under 50).
14. Clean up LaTeX code.
15. Empty out `class` and `data-mw` attributes.
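
As a rough illustration (not the exact scripts shipped with the dataset), the following BeautifulSoup sketch shows how steps 6 and 15 might look:

```python
# Simplified sketch of HTML steps 6 and 15; a hypothetical helper, not the
# production code used to build the dataset.
from bs4 import BeautifulSoup


def clean_article_html(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")

    # Step 6: remove <a> tags whose visible text is just the href URL itself.
    for a in soup.find_all("a", href=True):
        if a.get_text(strip=True) == a["href"].strip():
            a.decompose()

    # Step 15: empty out class and data-mw attributes on every remaining tag.
    for tag in soup.find_all(True):
        if "class" in tag.attrs:
            tag["class"] = []
        if "data-mw" in tag.attrs:
            tag["data-mw"] = ""

    return str(soup)
```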

For Markdown:

1. Clean up punctuation.
2. Collect the text length (text normalized to NFKC, keeping CJK characters as-is while decomposing Arabic characters; double-width characters are counted as 2 instead of 1).
3. Filter on the collected text length (articles shorter than 1,000 characters are dropped).

The final Markdown text and additional metadata are included in the JSONL files. The scripts used are located in the main directory of this repository.
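
Below is a minimal sketch of the width-aware length measurement described in Markdown steps 2 and 3. It approximates the idea (Unicode normalization plus counting double-width characters as 2) rather than reproducing the exact normalization rules used for the dataset:

```python
import unicodedata


def display_length(text: str) -> int:
    """Approximate text length: double-width (CJK) characters count as 2."""
    normalized = unicodedata.normalize("NFKC", text)
    length = 0
    for ch in normalized:
        # East Asian Wide ("W") and Fullwidth ("F") characters count as 2.
        length += 2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
    return length


def passes_length_filter(text: str, minimum: int = 1000) -> bool:
    """Markdown step 3: drop articles shorter than 1,000 characters."""
    return display_length(text) >= minimum
```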

### Data keys

Users can run `less` on the JSONL files to inspect their contents. A sample record and a list of dictionary keys are provided below:

```json
{
    "text": "\n**Tharman Shanmugaratnam** PBM (born 25 February 1957) is a Singaporean politician and economist. He is the President of Singapore since 2023. \n\nHe was Senior Minister of Singapore between 2019 and 2023. He was also the Coordinating Minister for Social Policies between 2015 and 2023, and Chairman of the Monetary Authority of Singapore between 2011 and 2023.\n\nOn 8 June 2023, Tharman announced his plans to run for president in the 2023 presidential election. He was elected on 2 September 2023 in a landslide victory, winning 70.40% of the vote.\n\nEarly life and education\n------------------------\n\nTharman was born in the Colony of Singapore in 1957. He studied at the Anglo-Chinese School. When he was studying there, he was not interested in his studies and was not disciplined. However, he liked to read and tried out poetry. During his time at Anglo-Chinese School, he created four poets with his schoolmates. Also, he was interested in sports and spent most of his time playing sports. He even joined his school's hockey team.\n\nThen, he attended the London School of Economics (LSE), graduating with a Bachelor of Science degree in economics.\n\nAfter getting his bachelor's, Tharman went on to study at Wolfson College at the University of Cambridge. There, he completed a Master of Philosophy degree in economics. \n\nTharman then became a student at the Harvard Kennedy School at Harvard University, where he finished a Master in Public Administration (MPA) degree. He was a student activist there. He explored left-wing politics, as he did not agree with the ruling People's Action Party back in Singapore.\n\nTharman was a recipient of the Lucius N. Littauer Fellows Award. The award is given to students with MPA's who showed academic excellence and leadership.In 2011, the LSE gave him an Honorary Fellowship.<...TRUNCATED IN SAMPLE>",
    "meta": {
        "title": "Tharman Shanmugaratnam",
        "mostly_tablelist": false,
        "tablelist_ratio": [
            4082,
            8644,
            0.47223507635354
        ],
        "infobox": [
            "<...TRUNCATED IN SAMPLE>"
        ],
        "td_tables": [],
        "text_length": 5553
    }
}
```

```
text: str (Markdown text)
meta: dict (Contains additional metadata)
  - title: str (Article title)
  - mostly_tablelist: bool (Internal flag for HTML step 10)
  - tablelist_ratio: list (Internal data used to compute mostly_tablelist)
  - infobox: list (A list of infoboxes extracted from the raw HTML, with data-mw attributes)
  - td_tables: list (Tables extracted in HTML step 13)
  - text_length: int (Obtained from Markdown step 2)
```
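
As an illustration, a record with the keys above can be read from a local JSONL shard as follows (`shard.jsonl` is a placeholder filename, not the dataset's actual file layout):

```python
import json

# "shard.jsonl" is a hypothetical local file name for one shard of the dataset.
with open("shard.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["meta"]["title"], record["meta"]["text_length"])
        print(record["text"][:200])  # first 200 characters of the Markdown text
        break  # only inspect the first record
```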

### Dataset Curators

KaraKaraWitch. (I typically hang out in the PygmalionAI Discord, sometimes EleutherAI. If something is wrong, ping `@karakarawitch` on Discord.)

I'd be happy if you could spread the word and recommend this dataset over wikitext for your use cases `:)`

### Licensing Information

Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (un-versioned, with no invariant sections, front-cover texts, or back-cover texts). 

Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.

The Recursal Waifus (the banner image) are licensed under CC BY-SA.
They do not represent the related websites in any official capacity unless otherwise stated or announced by the website.
You may use them as a banner image; however, you must always link back to the dataset.

### Citation Information

```
@ONLINE{superwiki-next,
  title         = {SuperWikiNEXT-32B},
  author        = {KaraKaraWitch and recursal.ai},
  year          = {2024},
  howpublished  = {\url{https://huggingface.co/datasets/recursal/SuperWikipedia-NEXT}},
}
```