system HF staff committed on
Commit 04eb92e
Parent: 48298b7

Update files from the datasets library (from 1.16.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.16.0

Files changed (2)
  1. README.md +645 -49
  2. wikipedia.py +6 -6
README.md CHANGED
@@ -1,7 +1,629 @@
1
  ---
2
- languages:
3
- - en
4
  paperswithcode_id: null
5
  ---
6
 
7
  # Dataset Card for "wikipedia"
@@ -64,6 +686,8 @@ The datasets are built from the Wikipedia dump
64
  contains the content of one full Wikipedia article with cleaning to strip
65
  markdown and unwanted sections (references, etc.).
66
 
67
  ### Supported Tasks and Leaderboards
68
 
69
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -78,16 +702,6 @@ We show detailed information for up to 5 configurations of the dataset.
78
 
79
  ### Data Instances
80
 
81
- #### 20200501.de
82
-
83
- - **Size of downloaded dataset files:** 5531.82 MB
84
- - **Size of the generated dataset:** 7716.79 MB
85
- - **Total amount of disk used:** 13248.61 MB
86
-
87
- An example of 'train' looks as follows.
88
- ```
89
-
90
- ```
91
 
92
  #### 20200501.en
93
 
@@ -95,70 +709,52 @@ An example of 'train' looks as follows.
95
  - **Size of the generated dataset:** 17481.07 MB
96
  - **Total amount of disk used:** 34877.35 MB
97
 
98
- An example of 'train' looks as follows.
99
  ```
100
-
101
  ```
102
 
103
  #### 20200501.fr
104
 
105
  - **Size of downloaded dataset files:** 4653.55 MB
106
  - **Size of the generated dataset:** 6182.24 MB
107
  - **Total amount of disk used:** 10835.79 MB
108
 
109
- An example of 'train' looks as follows.
110
- ```
111
-
112
- ```
113
-
114
  #### 20200501.frr
115
 
116
  - **Size of downloaded dataset files:** 9.05 MB
117
  - **Size of the generated dataset:** 5.88 MB
118
  - **Total amount of disk used:** 14.93 MB
119
 
120
- An example of 'train' looks as follows.
121
- ```
122
-
123
- ```
124
-
125
  #### 20200501.it
126
 
127
  - **Size of downloaded dataset files:** 2970.57 MB
128
  - **Size of the generated dataset:** 3809.89 MB
129
  - **Total amount of disk used:** 6780.46 MB
130
 
131
- An example of 'train' looks as follows.
132
- ```
133
-
134
- ```
135
-
136
  ### Data Fields
137
 
138
- The data fields are the same among all splits.
139
 
140
- #### 20200501.de
141
- - `title`: a `string` feature.
142
- - `text`: a `string` feature.
143
-
144
- #### 20200501.en
145
- - `title`: a `string` feature.
146
- - `text`: a `string` feature.
147
-
148
- #### 20200501.fr
149
- - `title`: a `string` feature.
150
- - `text`: a `string` feature.
151
-
152
- #### 20200501.frr
153
- - `title`: a `string` feature.
154
- - `text`: a `string` feature.
155
-
156
- #### 20200501.it
157
- - `title`: a `string` feature.
158
- - `text`: a `string` feature.
159
 
160
  ### Data Splits
161
 
162
  | name | train |
163
  |------------|------:|
164
  |20200501.de |3140341|
 
1
  ---
2
+ annotations_creators:
3
+ - no-annotation
4
+ language_creators:
5
+ - crowdsourced
6
+ pretty_name: Wikipedia
7
  paperswithcode_id: null
8
+ licenses:
9
+ - cc-by-sa-3-0
10
+ - gfdl-1-3-or-later
11
+ task_categories:
12
+ - sequence-modeling
13
+ task_ids:
14
+ - language-modeling
15
+ source_datasets:
16
+ - original
17
+ multilinguality:
18
+ - multilingual
19
+ size_categories:
20
+ - n<1K
21
+ - 1K<n<10K
22
+ - 10K<n<100K
23
+ - 100K<n<1M
24
+ - 1M<n<10M
25
+ - 10M<n<100M
26
+ languages:
27
+ 20200501-aa:
28
+ - aa
29
+ 20200501-ab:
30
+ - ab
31
+ 20200501-ace:
32
+ - ace
33
+ 20200501-ady:
34
+ - unknown
35
+ 20200501-af:
36
+ - af
37
+ 20200501-ak:
38
+ - ak
39
+ 20200501-als:
40
+ - als
41
+ 20200501-am:
42
+ - am
43
+ 20200501-an:
44
+ - an
45
+ 20200501-ang:
46
+ - ang
47
+ 20200501-ar:
48
+ - ar
49
+ 20200501-arc:
50
+ - arc
51
+ 20200501-arz:
52
+ - arz
53
+ 20200501-as:
54
+ - as
55
+ 20200501-ast:
56
+ - ast
57
+ 20200501-atj:
58
+ - atj
59
+ 20200501-av:
60
+ - av
61
+ 20200501-ay:
62
+ - ay
63
+ 20200501-az:
64
+ - az
65
+ 20200501-azb:
66
+ - azb
67
+ 20200501-ba:
68
+ - ba
69
+ 20200501-bar:
70
+ - bar
71
+ 20200501-bat-smg:
72
+ - sgs
73
+ 20200501-bcl:
74
+ - bcl
75
+ 20200501-be:
76
+ - be
77
+ 20200501-be-x-old:
78
+ - unknown
79
+ 20200501-bg:
80
+ - bg
81
+ 20200501-bh:
82
+ - bh
83
+ 20200501-bi:
84
+ - bi
85
+ 20200501-bjn:
86
+ - bjn
87
+ 20200501-bm:
88
+ - bm
89
+ 20200501-bn:
90
+ - bn
91
+ 20200501-bo:
92
+ - bo
93
+ 20200501-bpy:
94
+ - bpy
95
+ 20200501-br:
96
+ - br
97
+ 20200501-bs:
98
+ - bs
99
+ 20200501-bug:
100
+ - bug
101
+ 20200501-bxr:
102
+ - bxr
103
+ 20200501-ca:
104
+ - ca
105
+ 20200501-cbk-zam:
106
+ - cbk
107
+ 20200501-cdo:
108
+ - cdo
109
+ 20200501-ce:
110
+ - ce
111
+ 20200501-ceb:
112
+ - ceb
113
+ 20200501-ch:
114
+ - ch
115
+ 20200501-cho:
116
+ - cho
117
+ 20200501-chr:
118
+ - chr
119
+ 20200501-chy:
120
+ - chy
121
+ 20200501-ckb:
122
+ - ckb
123
+ 20200501-co:
124
+ - co
125
+ 20200501-cr:
126
+ - cr
127
+ 20200501-crh:
128
+ - crh
129
+ 20200501-cs:
130
+ - cs
131
+ 20200501-csb:
132
+ - csb
133
+ 20200501-cu:
134
+ - cu
135
+ 20200501-cv:
136
+ - cv
137
+ 20200501-cy:
138
+ - cy
139
+ 20200501-da:
140
+ - da
141
+ 20200501-de:
142
+ - de
143
+ 20200501-din:
144
+ - din
145
+ 20200501-diq:
146
+ - diq
147
+ 20200501-dsb:
148
+ - dsb
149
+ 20200501-dty:
150
+ - dty
151
+ 20200501-dv:
152
+ - dv
153
+ 20200501-dz:
154
+ - dz
155
+ 20200501-ee:
156
+ - ee
157
+ 20200501-el:
158
+ - el
159
+ 20200501-eml:
160
+ - eml
161
+ 20200501-en:
162
+ - en
163
+ 20200501-eo:
164
+ - eo
165
+ 20200501-es:
166
+ - es
167
+ 20200501-et:
168
+ - et
169
+ 20200501-eu:
170
+ - eu
171
+ 20200501-ext:
172
+ - ext
173
+ 20200501-fa:
174
+ - fa
175
+ 20200501-ff:
176
+ - ff
177
+ 20200501-fi:
178
+ - fi
179
+ 20200501-fiu-vro:
180
+ - vro
181
+ 20200501-fj:
182
+ - fj
183
+ 20200501-fo:
184
+ - fo
185
+ 20200501-fr:
186
+ - fr
187
+ 20200501-frp:
188
+ - frp
189
+ 20200501-frr:
190
+ - frr
191
+ 20200501-fur:
192
+ - fur
193
+ 20200501-fy:
194
+ - fy
195
+ 20200501-ga:
196
+ - ga
197
+ 20200501-gag:
198
+ - gag
199
+ 20200501-gan:
200
+ - gan
201
+ 20200501-gd:
202
+ - gd
203
+ 20200501-gl:
204
+ - gl
205
+ 20200501-glk:
206
+ - glk
207
+ 20200501-gn:
208
+ - gn
209
+ 20200501-gom:
210
+ - gom
211
+ 20200501-gor:
212
+ - gor
213
+ 20200501-got:
214
+ - got
215
+ 20200501-gu:
216
+ - gu
217
+ 20200501-gv:
218
+ - gv
219
+ 20200501-ha:
220
+ - ha
221
+ 20200501-hak:
222
+ - hak
223
+ 20200501-haw:
224
+ - haw
225
+ 20200501-he:
226
+ - he
227
+ 20200501-hi:
228
+ - hi
229
+ 20200501-hif:
230
+ - hif
231
+ 20200501-ho:
232
+ - ho
233
+ 20200501-hr:
234
+ - hr
235
+ 20200501-hsb:
236
+ - hsb
237
+ 20200501-ht:
238
+ - ht
239
+ 20200501-hu:
240
+ - hu
241
+ 20200501-hy:
242
+ - hy
243
+ 20200501-ia:
244
+ - ia
245
+ 20200501-id:
246
+ - id
247
+ 20200501-ie:
248
+ - ie
249
+ 20200501-ig:
250
+ - ig
251
+ 20200501-ii:
252
+ - ii
253
+ 20200501-ik:
254
+ - ik
255
+ 20200501-ilo:
256
+ - ilo
257
+ 20200501-inh:
258
+ - inh
259
+ 20200501-io:
260
+ - io
261
+ 20200501-is:
262
+ - is
263
+ 20200501-it:
264
+ - it
265
+ 20200501-iu:
266
+ - iu
267
+ 20200501-ja:
268
+ - ja
269
+ 20200501-jam:
270
+ - jam
271
+ 20200501-jbo:
272
+ - jbo
273
+ 20200501-jv:
274
+ - jv
275
+ 20200501-ka:
276
+ - ka
277
+ 20200501-kaa:
278
+ - kaa
279
+ 20200501-kab:
280
+ - kab
281
+ 20200501-kbd:
282
+ - kbd
283
+ 20200501-kbp:
284
+ - kbp
285
+ 20200501-kg:
286
+ - kg
287
+ 20200501-ki:
288
+ - ki
289
+ 20200501-kj:
290
+ - kj
291
+ 20200501-kk:
292
+ - kk
293
+ 20200501-kl:
294
+ - kl
295
+ 20200501-km:
296
+ - km
297
+ 20200501-kn:
298
+ - kn
299
+ 20200501-ko:
300
+ - ko
301
+ 20200501-koi:
302
+ - koi
303
+ 20200501-krc:
304
+ - krc
305
+ 20200501-ks:
306
+ - ks
307
+ 20200501-ksh:
308
+ - ksh
309
+ 20200501-ku:
310
+ - ku
311
+ 20200501-kv:
312
+ - kv
313
+ 20200501-kw:
314
+ - kw
315
+ 20200501-ky:
316
+ - ky
317
+ 20200501-la:
318
+ - la
319
+ 20200501-lad:
320
+ - lad
321
+ 20200501-lb:
322
+ - lb
323
+ 20200501-lbe:
324
+ - lbe
325
+ 20200501-lez:
326
+ - lez
327
+ 20200501-lfn:
328
+ - lfn
329
+ 20200501-lg:
330
+ - lg
331
+ 20200501-li:
332
+ - li
333
+ 20200501-lij:
334
+ - lij
335
+ 20200501-lmo:
336
+ - lmo
337
+ 20200501-ln:
338
+ - ln
339
+ 20200501-lo:
340
+ - lo
341
+ 20200501-lrc:
342
+ - lrc
343
+ 20200501-lt:
344
+ - lt
345
+ 20200501-ltg:
346
+ - ltg
347
+ 20200501-lv:
348
+ - lv
349
+ 20200501-mai:
350
+ - mai
351
+ 20200501-map-bms:
352
+ - unknown
353
+ 20200501-mdf:
354
+ - mdf
355
+ 20200501-mg:
356
+ - mg
357
+ 20200501-mh:
358
+ - mh
359
+ 20200501-mhr:
360
+ - mhr
361
+ 20200501-mi:
362
+ - mi
363
+ 20200501-min:
364
+ - min
365
+ 20200501-mk:
366
+ - mk
367
+ 20200501-ml:
368
+ - ml
369
+ 20200501-mn:
370
+ - mn
371
+ 20200501-mr:
372
+ - mr
373
+ 20200501-mrj:
374
+ - mrj
375
+ 20200501-ms:
376
+ - ms
377
+ 20200501-mt:
378
+ - mt
379
+ 20200501-mus:
380
+ - mus
381
+ 20200501-mwl:
382
+ - mwl
383
+ 20200501-my:
384
+ - my
385
+ 20200501-myv:
386
+ - myv
387
+ 20200501-mzn:
388
+ - mzn
389
+ 20200501-na:
390
+ - na
391
+ 20200501-nah:
392
+ - nah
393
+ 20200501-nap:
394
+ - nap
395
+ 20200501-nds:
396
+ - nds
397
+ 20200501-nds-nl:
398
+ - nds-nl
399
+ 20200501-ne:
400
+ - ne
401
+ 20200501-new:
402
+ - new
403
+ 20200501-ng:
404
+ - ng
405
+ 20200501-nl:
406
+ - nl
407
+ 20200501-nn:
408
+ - nn
409
+ 20200501-no:
410
+ - "no"
411
+ 20200501-nov:
412
+ - nov
413
+ 20200501-nrm:
414
+ - nrf
415
+ 20200501-nso:
416
+ - nso
417
+ 20200501-nv:
418
+ - nv
419
+ 20200501-ny:
420
+ - ny
421
+ 20200501-oc:
422
+ - oc
423
+ 20200501-olo:
424
+ - olo
425
+ 20200501-om:
426
+ - om
427
+ 20200501-or:
428
+ - or
429
+ 20200501-os:
430
+ - os
431
+ 20200501-pa:
432
+ - pa
433
+ 20200501-pag:
434
+ - pag
435
+ 20200501-pam:
436
+ - pam
437
+ 20200501-pap:
438
+ - pap
439
+ 20200501-pcd:
440
+ - pcd
441
+ 20200501-pdc:
442
+ - pdc
443
+ 20200501-pfl:
444
+ - pfl
445
+ 20200501-pi:
446
+ - pi
447
+ 20200501-pih:
448
+ - pih
449
+ 20200501-pl:
450
+ - pl
451
+ 20200501-pms:
452
+ - pms
453
+ 20200501-pnb:
454
+ - pnb
455
+ 20200501-pnt:
456
+ - pnt
457
+ 20200501-ps:
458
+ - ps
459
+ 20200501-pt:
460
+ - pt
461
+ 20200501-qu:
462
+ - qu
463
+ 20200501-rm:
464
+ - rm
465
+ 20200501-rmy:
466
+ - rmy
467
+ 20200501-rn:
468
+ - rn
469
+ 20200501-ro:
470
+ - ro
471
+ 20200501-roa-rup:
472
+ - rup
473
+ 20200501-roa-tara:
474
+ - unknown
475
+ 20200501-ru:
476
+ - ru
477
+ 20200501-rue:
478
+ - rue
479
+ 20200501-rw:
480
+ - rw
481
+ 20200501-sa:
482
+ - sa
483
+ 20200501-sah:
484
+ - sah
485
+ 20200501-sat:
486
+ - sat
487
+ 20200501-sc:
488
+ - sc
489
+ 20200501-scn:
490
+ - scn
491
+ 20200501-sco:
492
+ - sco
493
+ 20200501-sd:
494
+ - sd
495
+ 20200501-se:
496
+ - se
497
+ 20200501-sg:
498
+ - sg
499
+ 20200501-sh:
500
+ - sh
501
+ 20200501-si:
502
+ - si
503
+ 20200501-simple:
504
+ - simple
505
+ 20200501-sk:
506
+ - sk
507
+ 20200501-sl:
508
+ - sl
509
+ 20200501-sm:
510
+ - sm
511
+ 20200501-sn:
512
+ - sn
513
+ 20200501-so:
514
+ - so
515
+ 20200501-sq:
516
+ - sq
517
+ 20200501-sr:
518
+ - sr
519
+ 20200501-srn:
520
+ - srn
521
+ 20200501-ss:
522
+ - ss
523
+ 20200501-st:
524
+ - st
525
+ 20200501-stq:
526
+ - stq
527
+ 20200501-su:
528
+ - su
529
+ 20200501-sv:
530
+ - sv
531
+ 20200501-sw:
532
+ - sw
533
+ 20200501-szl:
534
+ - szl
535
+ 20200501-ta:
536
+ - ta
537
+ 20200501-tcy:
538
+ - tcy
539
+ 20200501-te:
540
+ - te
541
+ 20200501-tet:
542
+ - tdt
543
+ 20200501-tg:
544
+ - tg
545
+ 20200501-th:
546
+ - th
547
+ 20200501-ti:
548
+ - ti
549
+ 20200501-tk:
550
+ - tk
551
+ 20200501-tl:
552
+ - tl
553
+ 20200501-tn:
554
+ - tn
555
+ 20200501-to:
556
+ - to
557
+ 20200501-tpi:
558
+ - tpi
559
+ 20200501-tr:
560
+ - tr
561
+ 20200501-ts:
562
+ - ts
563
+ 20200501-tt:
564
+ - tt
565
+ 20200501-tum:
566
+ - tum
567
+ 20200501-tw:
568
+ - tw
569
+ 20200501-ty:
570
+ - ty
571
+ 20200501-tyv:
572
+ - tyv
573
+ 20200501-udm:
574
+ - udm
575
+ 20200501-ug:
576
+ - ug
577
+ 20200501-uk:
578
+ - uk
579
+ 20200501-ur:
580
+ - ur
581
+ 20200501-uz:
582
+ - uz
583
+ 20200501-ve:
584
+ - ve
585
+ 20200501-vec:
586
+ - vec
587
+ 20200501-vep:
588
+ - vep
589
+ 20200501-vi:
590
+ - vi
591
+ 20200501-vls:
592
+ - vls
593
+ 20200501-vo:
594
+ - vo
595
+ 20200501-wa:
596
+ - wa
597
+ 20200501-war:
598
+ - war
599
+ 20200501-wo:
600
+ - wo
601
+ 20200501-wuu:
602
+ - wuu
603
+ 20200501-xal:
604
+ - xal
605
+ 20200501-xh:
606
+ - xh
607
+ 20200501-xmf:
608
+ - xmf
609
+ 20200501-yi:
610
+ - yi
611
+ 20200501-yo:
612
+ - yo
613
+ 20200501-za:
614
+ - za
615
+ 20200501-zea:
616
+ - zea
617
+ 20200501-zh:
618
+ - zh
619
+ 20200501-zh-classical:
620
+ - lzh
621
+ 20200501-zh-min-nan:
622
+ - nan
623
+ 20200501-zh-yue:
624
+ - yue
625
+ 20200501-zu:
626
+ - zu
627
  ---
628
 
629
  # Dataset Card for "wikipedia"
 
686
  contains the content of one full Wikipedia article with cleaning to strip
687
  markdown and unwanted sections (references, etc.).
688
 
689
+ The articles have been parsed using the ``mwparserfromhell`` tool.
690
+
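For context, ``mwparserfromhell`` turns raw wikitext into plain text; a minimal illustrative sketch (not the script's exact cleaning code):

```python
# Illustrative only -- the dataset's own cleaning step is more involved.
import mwparserfromhell

wikitext = "'''Yangliuqing''' is a [[market town]] in [[Xiqing District]]."
wikicode = mwparserfromhell.parse(wikitext)   # parse raw MediaWiki markup
print(wikicode.strip_code())                  # -> Yangliuqing is a market town in Xiqing District.
```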
691
  ### Supported Tasks and Leaderboards
692
 
693
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
702
 
703
  ### Data Instances
704
 
705
 
706
  #### 20200501.en
707
 
 
709
  - **Size of the generated dataset:** 17481.07 MB
710
  - **Total amount of disk used:** 34877.35 MB
711
 
712
+ An example looks as follows.
713
  ```
714
+ {
715
+ 'title': 'Yangliuqing',
716
+ 'text': 'Yangliuqing () is a market town in Xiqing District, in the western suburbs of Tianjin,
717
+ ...
718
+ and traditional period furnishings and crafts.\n\nSee also \n\nList of township-level divisions of Tianjin\n\nReferences \n\n
719
+ http://arts.cultural-china.com/en/65Arts4795.html\n\nCategory:Towns in Tianjin'
720
+ }
721
  ```
722
 
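The configuration above can be loaded directly with `datasets.load_dataset`; a hedged usage sketch (the pre-processed configurations such as `20200501.en` ship ready-made files, while other languages need an Apache Beam runner to build from the raw dump):

```python
from datasets import load_dataset

# Pre-processed configurations download prepared files; other language/date
# combinations require passing a beam_runner to process the raw dump.
wiki = load_dataset("wikipedia", "20200501.en", split="train")

print(wiki[0]["title"])        # e.g. 'Yangliuqing'
print(wiki[0]["text"][:100])   # first characters of the article body
```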
723
+ #### 20200501.de
724
+
725
+ - **Size of downloaded dataset files:** 5531.82 MB
726
+ - **Size of the generated dataset:** 7716.79 MB
727
+ - **Total amount of disk used:** 13248.61 MB
728
+
729
  #### 20200501.fr
730
 
731
  - **Size of downloaded dataset files:** 4653.55 MB
732
  - **Size of the generated dataset:** 6182.24 MB
733
  - **Total amount of disk used:** 10835.79 MB
734
 
735
  #### 20200501.frr
736
 
737
  - **Size of downloaded dataset files:** 9.05 MB
738
  - **Size of the generated dataset:** 5.88 MB
739
  - **Total amount of disk used:** 14.93 MB
740
 
741
  #### 20200501.it
742
 
743
  - **Size of downloaded dataset files:** 2970.57 MB
744
  - **Size of the generated dataset:** 3809.89 MB
745
  - **Total amount of disk used:** 6780.46 MB
746
 
747
  ### Data Fields
748
 
749
+ The data fields are the same among all splits and configurations:
750
 
751
+ - `title`: a `string` feature corresponding to the title of the article
752
+ - `text`: a `string` feature corresponding to the text content of the article
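A minimal sketch of that shared schema, using hypothetical example values:

```python
from datasets import Features, Value

features = Features({"title": Value("string"), "text": Value("string")})

# Every record is a plain dict with exactly these two string fields.
example = {"title": "Yangliuqing", "text": "Yangliuqing () is a market town in Xiqing District, ..."}
encoded = features.encode_example(example)  # validates/encodes the example against the schema
```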
753
 
754
  ### Data Splits
755
 
756
+ Here are the sizes for several configurations:
757
+
758
  | name | train |
759
  |------------|------:|
760
  |20200501.de |3140341|
wikipedia.py CHANGED
@@ -374,8 +374,8 @@ class WikipediaConfig(datasets.BuilderConfig):
374
  **kwargs: keyword arguments forwarded to super.
375
  """
376
  super(WikipediaConfig, self).__init__(
377
- name="{0}.{1}".format(date, language),
378
- description="Wikipedia dataset for {0}, parsed from {1} dump.".format(language, date),
379
  **kwargs,
380
  )
381
  self.date = date
@@ -465,16 +465,16 @@ class Wikipedia(datasets.BeamBasedBuilder):
465
  if not elem.tag.endswith("page"):
466
  continue
467
  namespace = elem.tag[:-4]
468
- title = elem.find("./{0}title".format(namespace)).text
469
- ns = elem.find("./{0}ns".format(namespace)).text
470
- id_ = elem.find("./{0}id".format(namespace)).text
471
 
472
  # Filter pages that are not in the "main" namespace.
473
  if ns != "0":
474
  elem.clear()
475
  continue
476
 
477
- raw_content = elem.find("./{0}revision/{0}text".format(namespace)).text
478
  elem.clear()
479
 
480
  # Filter redirects.
 
374
  **kwargs: keyword arguments forwarded to super.
375
  """
376
  super(WikipediaConfig, self).__init__(
377
+ name=f"{date}.{language}",
378
+ description=f"Wikipedia dataset for {language}, parsed from {date} dump.",
379
  **kwargs,
380
  )
381
  self.date = date
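The change above only swaps `str.format` for f-strings, with identical output; a tiny standalone illustration with hypothetical values:

```python
date, language = "20200501", "frr"

name = f"{date}.{language}"
description = f"Wikipedia dataset for {language}, parsed from {date} dump."

print(name)         # 20200501.frr
print(description)  # Wikipedia dataset for frr, parsed from 20200501 dump.
```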
 
465
  if not elem.tag.endswith("page"):
466
  continue
467
  namespace = elem.tag[:-4]
468
+ title = elem.find(f"./{namespace}title").text
469
+ ns = elem.find(f"./{namespace}ns").text
470
+ id_ = elem.find(f"./{namespace}id").text
471
 
472
  # Filter pages that are not in the "main" namespace.
473
  if ns != "0":
474
  elem.clear()
475
  continue
476
 
477
+ raw_content = elem.find(f"./{namespace}revision/{namespace}text").text
478
  elem.clear()
479
 
480
  # Filter redirects.
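For context, the page-extraction logic shown in this hunk can be sketched as a standalone generator (assumptions: a local, uncompressed MediaWiki XML dump at `dump.xml`; the real builder streams bz2 chunks inside an Apache Beam pipeline and detects redirects with a regex):

```python
import xml.etree.ElementTree as ET

def iter_main_articles(path):
    """Yield (id, title, raw wikitext) for pages in the main namespace, skipping redirects."""
    for _, elem in ET.iterparse(path, events=("end",)):
        if not elem.tag.endswith("page"):
            continue
        namespace = elem.tag[:-4]  # drop the trailing "page", keep the "{...}" XML namespace prefix
        title = elem.find(f"./{namespace}title").text
        ns = elem.find(f"./{namespace}ns").text
        id_ = elem.find(f"./{namespace}id").text
        raw_content = elem.find(f"./{namespace}revision/{namespace}text").text
        elem.clear()
        if ns != "0":  # keep only articles in the "main" namespace
            continue
        if raw_content is None or raw_content.lstrip().lower().startswith("#redirect"):
            continue  # skip redirect stubs
        yield id_, title, raw_content

# Example usage (hypothetical file name):
# for id_, title, _text in iter_main_articles("dump.xml"):
#     print(id_, title)
```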