sabilmakbar committed on
Commit
a91c8a9
1 Parent(s): 9cd3184

Add New Lang (#5)

- Update codebase and add new lang (f2ca54309b7c8beee4199ecb619efb18c55543d6)
- Update license based on Wikipedia License Info (b91d0d15f3e5be462055c801edab630a1c240754)
- Update README (ba2fccd8416f3906447496f8f83e8a9a3b23b12e)
- Update license tag (f960d921db868008036d688f5592dcd222fd639c)

README.md CHANGED
@@ -3,30 +3,8 @@ annotations_creators:
 - no-annotation
 language_creators:
 - crowdsourced
-language:
-- ace
-- ban
-- bjn
-- bug
-- gor
-- km
-- id
-- jv
-- lo
-- mad
-- mnw
-- min
-- ms
-- my
-- nia
-- shn
-- su
-- tet
-- th
-- vi
 license:
-- cc-by-sa-3.0
-- gfdl
+- cc-by-sa-4.0
 multilinguality:
 - multilingual
 source_datasets:
@@ -34,6 +12,36 @@ source_datasets:
 task_categories:
 - text-generation
 - fill-mask
+language:
+- ace
+- ban
+- bcl
+- bjn
+- bug
+- cbk
+- ceb
+- gor
+- id
+- ilo
+- jv
+- km
+- lo
+- mad
+- min
+- mnw
+- ms
+- my
+- nia
+- pag
+- pam
+- shn
+- su
+- ta
+- th
+- tl
+- tet
+- vi
+- war
 task_ids:
 - language-modeling
 - masked-language-modeling
@@ -73,6 +81,9 @@ dataset_info:
   - name: cbk_zam
     num_bytes: 2033238
     num_examples: 3285
+  - name: ceb
+    num_bytes: 4572804909
+    num_examples: 6302896
   - name: gor
     num_bytes: 6239133
     num_examples: 15359
@@ -142,8 +153,8 @@ dataset_info:
   - name: war
     num_bytes: 454304567
     num_examples: 1266394
-  download_size: 6358317628
-  dataset_size: 6351100780
+  download_size: 10940051715
+  dataset_size: 10923905689
 - config_name: seawiki_dedup_all
   features:
   - name: url
@@ -171,6 +182,9 @@ dataset_info:
   - name: cbk_zam
     num_bytes: 1579651
     num_examples: 2242
+  - name: ceb
+    num_bytes: 4346511152
+    num_examples: 5815254
   - name: gor
     num_bytes: 6217480
     num_examples: 15290
@@ -240,8 +254,8 @@ dataset_info:
   - name: war
     num_bytes: 454266479
     num_examples: 1266204
-  download_size: 6347597222
-  dataset_size: 6340363195
+  download_size: 10701952694
+  dataset_size: 10686874347
 - config_name: seawiki_with_countries_all
   features:
   - name: url
@@ -338,6 +352,9 @@ dataset_info:
   - name: phl_pag
     num_bytes: 1370162
     num_examples: 2665
+  - name: phl_ceb
+    num_bytes: 4572804909
+    num_examples: 6302896
   - name: sgp_ms
     num_bytes: 419662356
     num_examples: 368628
@@ -359,8 +376,8 @@ dataset_info:
   - name: vnm_vi
     num_bytes: 1603057632
     num_examples: 1288680
-  download_size: 6358317628
-  dataset_size: 8501775123
+  download_size: 10940051715
+  dataset_size: 13074580032
 - config_name: seawiki_with_countries_dedup_all
   features:
   - name: url
@@ -457,6 +474,9 @@ dataset_info:
   - name: phl_pag
     num_bytes: 764869
     num_examples: 1108
+  - name: phl_ceb
+    num_bytes: 4346511152
+    num_examples: 5815254
   - name: sgp_ms
     num_bytes: 414783365
     num_examples: 348045
@@ -478,15 +498,13 @@ dataset_info:
   - name: vnm_vi
     num_bytes: 1602828123
     num_examples: 1287910
-  download_size: 6347597222
-  dataset_size: 8476086704
+  download_size: 10701952694
+  dataset_size: 12822597856
 ---
 
 # **SEA Wikipedia Data Repository**
 ---
-license: cc-by-sa-3.0
----
-Welcome to SEA Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purpose.
+Welcome to the SEA Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes. Since Wikipedia itself is licensed under [cc-by-sa 4.0](https://en.wikipedia.org/wiki/Wikipedia:Copyrights), we decided to follow it instead of the cc-by-sa 3.0 license carried by the Wikipedia HF data, since it grants more rights to the original authors/contributors.
 
 # Getting Started #
 ### To read the datasets directly ###
@@ -517,7 +535,7 @@ dataset = load_dataset(
 
 # **FAQS**
 ### What are the available languages provided in the dataset and from which country?
-You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume).
+You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume). All tables are sorted by the leftmost column.
 
 #### 1. Table of Countries and their Country Codes
 | Country Code | Country Name | Wiki Info |
@@ -535,38 +553,38 @@ You may check the following tables to understand the current coverage of this da
 | vnm | Vietnam | [Wiki Link](https://en.wikipedia.org/wiki/Vietnam) |
 
 #### 2. Table of Languages and Countries of their Speakers
-| Lang Code | Lang Name | Country Codes Spoken | Wiki Info | Total Data | Total Size (MiB rounded) |
-| :---: | :---: | :---: | :--- | ---: | ---: |
-| ace | Acehnese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Acehnese_language) | 12904 | 4.64 |
-| ban | Balinese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Balinese_language) | 19837 | 16.56 |
-| bjn | Banjarese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language) | 10437 | 6.35 |
-| bcl | Central Bicolano | phl | [Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language) | 15743 | 19.32 |
-| bug | Buginese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Buginese_language) | 9793 | 1.98 |
-| ceb | Cebuano | phl | [Wiki Link](https://en.wikipedia.org/wiki/Central_Bicolano_language) | *Not Supported Yet* | *Not Supported Yet* |
-| cbk (ISO 639-3) <br> cbk_zam (WikiMedia) | Zamboanga Chavacano/Chavacano | phl | [Wiki Link](https://en.wikipedia.org/wiki/Chavacano) | 3285 | 1.94 |
-| gor | Gorontalo | idn | [Wiki Link](https://en.wikipedia.org/wiki/Gorontalo_language) | 14514 | 5.71 |
-| ilo | Ilokano | phl | [Wiki Link](https://en.wikipedia.org/wiki/Ilocano_language) | 15371 | 15.94 |
-| km | Khmer | khm | [Wiki Link](https://en.wikipedia.org/wiki/Khmer_language) | 11994 | 98.37 |
-| id | Indonesian | idn | [Wiki Link](https://en.wikipedia.org/wiki/Indonesian_language) | 654287 | 1049.93 |
-| jv | Javanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Javanese_language) | 72667 | 66.54 |
-| lo | Lao | lao | [Wiki Link](https://en.wikipedia.org/wiki/Lao_language) | 5014 | 14.53 |
-| mad | Madurese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Madurese_language) | 1192 | 1.54 |
-| map_bms | Banyumasan <br>(Dialect of Javanese) | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banyumasan_dialect) | 11832 | 4.83 |
-| mnw | Mon | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Mon_language) | 3296 | 45.13 |
-| min | Minangkabau | idn | [Wiki Link](https://en.wikipedia.org/wiki/Minangkabau_language) | 225858 | 110.99 |
-| ms | Malay | mys, sgp, brn, idn | [Wiki Link](https://en.wikipedia.org/wiki/Malay_language) | 346186 | 391.43 |
-| my | Burmese | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Burmese_language) | 109310 | 298.85 |
-| nia | Nias | idn | [Wiki Link](https://en.wikipedia.org/wiki/Nias_language) | 1650 | 1.85 |
-| pag | Pangasinan | phl | [Wiki Link](https://en.wikipedia.org/wiki/Pangasinan_language) | 2665 | 1.31 |
-| pam | Kapampangan | phl | [Wiki Link](https://en.wikipedia.org/wiki/Kapampangan_language) | 9006 | 7.84 |
-| shn | Shan | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Shan_language) | 13945 | 32.19 |
-| su | Sundanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Sundanese_language) | 61494 | 45.21 |
-| ta | Tamil | mys, sgp | [Wiki Link](https://en.wikipedia.org/wiki/Tamil_language) | 160651 | 0.15 |
-| tet | Tetum | tls, idn | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1465 | 1.39 |
-| th | Thai | tha | [Wiki Link](https://en.wikipedia.org/wiki/Thai_language) | 159719 | 966.00 |
-| tl | Tagalog | phl | [Wiki Link](https://en.wikipedia.org/wiki/Tagalog_language) | 45341 | 81.42 |
-| vi | Vietnamese | vnm | [Wiki Link](https://en.wikipedia.org/wiki/Vietnamese_language) | 1288680 | 1528.79 |
-| war | Waray | phl | [Wiki Link](https://en.wikipedia.org/wiki/Waray_language) | 1266394 | 433.26 |
+| ISO 639-3 Lang Code | Dataset Lang Code | Lang Name | Country Codes Spoken | Wiki Info | Total Data | Total Size (MiB rounded) |
+| :---: | :---: | :---: | :---: | :--- | ---: | ---: |
+| ace | ace | Acehnese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Acehnese_language) | 12979 | 4.72 |
+| ban | ban | Balinese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Balinese_language) | 20611 | 17.19 |
+| bcl | bcl | Central Bicolano | phl | [Wiki Link](https://en.wikipedia.org/wiki/Central_Bikol) | 14079 | 19.05 |
+| bjn | bjn | Banjarese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language) | 10503 | 6.47 |
+| bug | bug | Buginese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Buginese_language) | 9969 | 2.08 |
+| bur | my | Burmese | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Burmese_language) | 108819 | 298.49 |
+| cbk | cbk_zam | Zamboanga Chavacano/Chavacano | phl | [Wiki Link](https://en.wikipedia.org/wiki/Chavacano) | 2242 | 1.51 |
+| ceb | ceb | Cebuano | phl | [Wiki Link](https://en.wikipedia.org/wiki/Cebuano_language) | 5815254 | 4,145.16 |
+| gor | gor | Gorontalo | idn | [Wiki Link](https://en.wikipedia.org/wiki/Gorontalo_language) | 15290 | 5.93 |
+| ilo | ilo | Ilokano | phl | [Wiki Link](https://en.wikipedia.org/wiki/Ilocano_language) | 15369 | 15.94 |
+| ind | id | Indonesian | idn | [Wiki Link](https://en.wikipedia.org/wiki/Indonesian_language) | 662443 | 1,066.10 |
+| jav | jv | Javanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Javanese_language) | 73080 | 68.66 |
+| khm | km | Khmer | khm | [Wiki Link](https://en.wikipedia.org/wiki/Khmer_language) | 11466 | 97.94 |
+| lao | lo | Lao | lao | [Wiki Link](https://en.wikipedia.org/wiki/Lao_language) | 4897 | 14.22 |
+| mad | mad | Madurese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Madurese_language) | 1192 | 1.54 |
+| may | ms | Malay | mys, sgp, brn, idn | [Wiki Link](https://en.wikipedia.org/wiki/Malay_language) | 348045 | 395.57 |
+| min | min | Minangkabau | idn | [Wiki Link](https://en.wikipedia.org/wiki/Minangkabau_language) | 225972 | 111.31 |
+| mnw | mnw | Mon | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Mon_language) | 3271 | 45.05 |
+| nia | nia | Nias | idn | [Wiki Link](https://en.wikipedia.org/wiki/Nias_language) | 1714 | 2.05 |
+| pag | pag | Pangasinan | phl | [Wiki Link](https://en.wikipedia.org/wiki/Pangasinan_language) | 1108 | 0.73 |
+| pam | pam | Kapampangan | phl | [Wiki Link](https://en.wikipedia.org/wiki/Kapampangan_language) | 8932 | 7.83 |
+| shn | shn | Shan | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Shan_language) | 13662 | 32.06 |
+| sun | su | Sundanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Sundanese_language) | 61529 | 45.31 |
+| tam | ta | Tamil | mys, sgp | [Wiki Link](https://en.wikipedia.org/wiki/Tamil_language) | 160580 | 771.58 |
+| tgl | tl | Tagalog | phl | [Wiki Link](https://en.wikipedia.org/wiki/Tagalog_language) | 45121 | 81.34 |
+| tha | th | Thai | tha | [Wiki Link](https://en.wikipedia.org/wiki/Thai_language) | 159666 | 965.95 |
+| tet | tet | Tetum | tls, idn | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1464 | 1.38 |
+| vie | vi | Vietnamese | vnm | [Wiki Link](https://en.wikipedia.org/wiki/Vietnamese_language) | 1287910 | 1,528.58 |
+| war | war | Waray | phl | [Wiki Link](https://en.wikipedia.org/wiki/Waray_language) | 1266204 | 433.22 |
+| (dialect) | map_bms | Banyumasan <br>(Dialect of Javanese) | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banyumasan_dialect) | 11839 | 4.83 |
 
 
 #### 3. Table of Token Statistics for Covered Languages
@@ -580,6 +598,7 @@ The token statistics is generated using ```tiktoken``` using encoder for GPT-4.
 | bjn | 1,935,505 | 184.28115776444827 | 2 | 30,170 | [36.0, 38.0, 39.0, 40.0, 42.0, 51.0, 82.0, 151.0, 367.0] |
 | bug | 553,693 | 55.54147858360919 | 1 | 13,951 | [31.0, 42.0, 43.0, 46.0, 48.0, 50.0, 52.0, 55.0, 57.0] |
 | cbk_zam | 402,703 | 179.6177520071365 | 2 | 6,494 | [35.0, 41.2, 56.0, 69.0, 90.0, 120.0, 138.0, 155.0, 294.9] |
+| ceb | 1,319,601,771 | 226.92074516435568 | 4 | 221,802 | [93.0, 108.0, 123.0, 136.0, 163.0, 207.0, 278.0, 377.0, 426.0] |
 | gor | 1,575,766 | 103.05860039241334 | 2 | 5,525 | [55.0, 58.0, 60.0, 62.0, 64.0, 66.0, 69.0, 75.0, 96.0] |
 | id | 325,411,713 | 491.22975561670967 | 1 | 198,597 | [54.0, 93.0, 123.0, 145.0, 180.0, 226.0, 332.0, 543.0, 1068.0] |
 | ilo | 5,593,491 | 363.94632051532307 | 17 | 18,202 | [59.0, 80.0, 91.0, 111.0, 152.0, 213.0, 303.0, 461.0, 856.0] |
@@ -602,7 +621,7 @@ The token statistics is generated using ```tiktoken``` using encoder for GPT-4.
 | th | 330,964,733 | 2,072.8566695476807 | 1 | 289,150 | [231.0, 390.0, 546.0, 727.0, 969.0, 1276.0, 1741.0, 2533.0, 4361.0] |
 | tl | 27,789,730 | 615.8934864032269 | 7 | 60,728 | [73.0, 116.0, 161.0, 214.0, 281.0, 360.0, 465.0, 666.0, 1136.0] |
 | vi | 546,481,258 | 424.3163404275143 | 3 | 246,463 | [46.0, 64.0, 71.0, 80.0, 86.0, 92.0, 120.0, 240.0, 824.0] |
-| war | 117,438,315 | 92.74833676090108 | 1 | 25,689 | [60.0, 77.0, 81.0, 84.0, 87.0, 90.0, 94.0, 99.0, 110.0] |`
+| war | 117,438,315 | 92.74833676090108 | 1 | 25,689 | [60.0, 77.0, 81.0, 84.0, 87.0, 90.0, 94.0, 99.0, 110.0] |
 
 Some other languages in SEA that already have their own Wiki Index at Wikimedia might be missing from this list. Any lang update PR is greatly appreciated!
 
@@ -612,23 +631,7 @@ The data available in here are processed with following flows:
 2. Furthermore, the ```title``` and ```text``` data are checked for string-matching duplication (duplication of text that has been pre-processed, i.e. symbols removed, HTML tags stripped, and ASCII/UTF-8 chars validated). You may check this [```dedup_raw_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) script to understand its implementation.
 
 ### How do I extract new Wikipedia Dataset of SEA languages?
-You may check to the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementations, or you can adjust the bash provided in [_```extract_raw_wiki_data_sea.sh```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data_sea.sh) to extract it on your own.
-
-### How do I extract new Wikipedia Dataset of SEA languages?
-You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check any latest available data and this link [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias_by_country) to map into any languages that you're wanting to extract. Please note that this dataset is extensible to any languages of your choice.
-
-### To replicate the whole dataset generation process ###
-1. Set-up a new Python/Conda Environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements on ```requirements.txt``` use this codebase via ```pip install -r requirements.txt```.
-
-2. Activate the chosen Python/Conda environment which the requirements are being installed.
-
-3. Force install ```multiprocess==0.70.15``` by using ```pip install multiprocess==0.70.15``` to avoid [this issue](https://github.com/huggingface/datasets/issues/5613#issuecomment-1703169594) (there's no other workaround for now)
-
-4. Run this ```sh``` script for extractions from Wikiedia HF using ```sh extract_raw_wiki_data_sea.sh```<br>
-This script will run [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to construct the Wiki Dataset.
-
-5. Run this ```sh``` script for deduplications from extracted data in Step 4 using ```sh dedup_raw_wiki_data_sea.sh```<br>
-This script will run [_```dedup_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) to do Wiki Dataset Clenasing. Please note that the cleansing process can be language/dialect specific.
+Please refer to the corresponding GitHub repo for more detailed info: [SEA Wiki Github Source Code](https://github.com/sabilmakbar/sea_wiki)
 
 ## Citation Info:
 ```
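The token statistics table above is described as being generated with ```tiktoken``` using the GPT-4 encoder. Below is a minimal sketch of how such per-article statistics can be computed; the DataFrame layout and the 10th–90th percentile grid are assumptions made to mirror the table's columns, not the repository's exact script:

```python
# Sketch: per-language token statistics with tiktoken's GPT-4 encoder.
# Assumes a pandas DataFrame `df` holding one language, with a "text" column.
import numpy as np
import pandas as pd
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

def token_stats(df: pd.DataFrame) -> dict:
    tokens_per_doc = df["text"].fillna("").map(lambda t: len(enc.encode(t)))
    deciles = np.percentile(tokens_per_doc, [10, 20, 30, 40, 50, 60, 70, 80, 90])
    return {
        "total_tokens": int(tokens_per_doc.sum()),
        "avg_tokens_per_doc": float(tokens_per_doc.mean()),
        "min_tokens": int(tokens_per_doc.min()),
        "max_tokens": int(tokens_per_doc.max()),
        "decile_token_counts": [round(float(d), 1) for d in deciles],
    }
```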
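The FAQ above notes that deduplication string-matches ```title``` and ```text``` after light pre-processing (symbols removed, HTML tags stripped). A minimal sketch of that idea, assuming pandas input; the actual normalization in ```dedup_raw_wiki_data.py``` may differ:

```python
# Sketch of string-matching dedup on normalized text (not the exact repo logic).
import re
import pandas as pd

def normalize(text: str) -> str:
    text = re.sub(r"<[^>]+>", " ", str(text))   # strip HTML tags
    text = re.sub(r"[^\w\s]", " ", text)        # drop symbols/punctuation
    return re.sub(r"\s+", " ", text).strip().lower()

def dedup(df: pd.DataFrame) -> pd.DataFrame:
    # Combine normalized title and text into one key, keep first occurrence.
    keys = df["title"].map(normalize) + "\x00" + df["text"].map(normalize)
    return df.loc[~keys.duplicated()].reset_index(drop=True)
```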
extract_raw_wiki_data_batched.py ADDED
@@ -0,0 +1,87 @@
+'''
+Script on Generating Wikipedia Data that are dumped into https://dumps.wikimedia.org/
+More info can be read on https://huggingface.co/datasets/wikipedia
+-------------------
+Check here to see available indexed data: https://dumps.wikimedia.org/backup-index.html
+Also check here to see language meta from its code: https://meta.wikimedia.org/wiki/List_of_Wikipedias
+'''
+
+import os, gc
+import logging
+import argparse
+
+import pandas as pd
+from datasets import load_dataset
+
+
+def set_logger():
+    # Set up the logger
+    logging.basicConfig(
+        level=logging.INFO,  # Set the desired logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
+        format='%(asctime)s [%(levelname)s]: %(message)s',  # Customize the log message format
+        datefmt='%Y-%m-%d %H:%M:%S'  # Customize the date/time format
+    )
+
+    # Create a file handler to write logs into a file
+    file_handler = logging.FileHandler('app.log')
+
+    # Set the log level for the file handler
+    file_handler.setLevel(logging.INFO)
+
+    # Create a formatter for the file handler (customize the log format for the file)
+    file_formatter = logging.Formatter('%(asctime)s [%(levelname)s]: %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
+    file_handler.setFormatter(file_formatter)
+
+    logger = logging.getLogger("Wiki Dataset Generation")
+    logger.addHandler(file_handler)
+
+    return logger
+
+
+# only executed if called directly
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+
+    parser.add_argument("--lang-id", help="Lang ID from Wikipedia Data to extract")
+
+    parser.add_argument("--date-ver", help="Date of Wikipedia Data (YYYYMMDD) generation to extract")
+
+    # default: all
+    parser.add_argument("--split-extr", help="""Split extraction config for choosing
+                        subsets of data to process. It follows python list slicing string args""",
+                        default=":")
+
+    # default: False (use `store_true` so the flag isn't received as a truthy string)
+    parser.add_argument("--force_rerun_split", help="""Flag to identify whether to check existing
+                        splits or forcing to re-create it""",
+                        action="store_true", default=False)
+
+    parser.add_argument("--save-dir-path", help="""Relative dir path of saved Wikipedia CSV data
+                        to the `extract_raw_wiki_data.py` script dir""",
+                        default=os.path.dirname(os.path.abspath(__file__)))
+
+    args = parser.parse_args()
+
+
+    dset_name = "sea_loader_batched/wiki_loader.py"
+
+    logger = set_logger()
+    logger.info("Parsing arguments...")
+
+    lang_id = args.lang_id
+    date_ver = args.date_ver
+    generated_split_extraction = args.split_extr
+    force_rerun_split_generation = args.force_rerun_split
+    save_dir = args.save_dir_path
+
+    logger.info("Loading the dataset from Wikipedia...")
+    df = load_dataset(dset_name, language=lang_id, date=date_ver, beam_runner='DirectRunner',
+                      split="train", subset_file_to_process=generated_split_extraction,
+                      force_rerun_split=force_rerun_split_generation).to_pandas()
+    logger.info("Loading done!")
+    logger.info(f"#Data collected: {df.shape[0]}")
+    logger.info("Saving dataset raw form...")
+    df.to_csv(f"{save_dir}/wiki_{lang_id}_{date_ver}_raw_dataset_splitted.csv", index=False)
+
+    del df
+    gc.collect()
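The ```--split-extr``` flag above is documented as following Python list-slicing string args; ```WikipediaConfig``` in ```sea_loader_batched/wiki_loader.py``` (shown below) turns such a string into a start/end range over the dump files. A standalone sketch of that parsing, for illustration only:

```python
# Sketch: how a ":"-style slice string selects dump-file splits,
# mirroring (in simplified form) the parsing done in WikipediaConfig.
def parse_split_extr(spec: str) -> tuple:
    parts = str(spec).split(":")
    if len(parts) > 2:
        raise ValueError(f"Unexpected format: {spec!r}")
    if parts == [""] or parts == ["", ""]:
        return None, None                       # "" or ":" -> take every file
    if len(parts) == 1:
        n = int(parts[0])                       # "3" -> just file #3
        return n, n + 1
    return int(parts[0]), int(parts[1]) + 1     # "2:5" -> files 2..5 inclusive

print(parse_split_extr(":"))    # (None, None)
print(parse_split_extr("3"))    # (3, 4)
print(parse_split_extr("2:5"))  # (2, 6)
```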
sea_loader_batched/concat_data.py ADDED
@@ -0,0 +1,65 @@
+import os
+import argparse
+import logging
+
+import pandas as pd
+
+
+def set_logger():
+    # Set up the logger
+    logging.basicConfig(
+        level=logging.INFO,  # Set the desired logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
+        format='%(asctime)s [%(levelname)s]: %(message)s',  # Customize the log message format
+        datefmt='%Y-%m-%d %H:%M:%S'  # Customize the date/time format
+    )
+
+    # Create a file handler to write logs into a file
+    file_handler = logging.FileHandler('app.log')
+
+    # Set the log level for the file handler
+    file_handler.setLevel(logging.INFO)
+
+    # Create a formatter for the file handler (customize the log format for the file)
+    file_formatter = logging.Formatter('%(asctime)s [%(levelname)s]: %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
+    file_handler.setFormatter(file_formatter)
+
+    logger = logging.getLogger("Wiki Dataset Generation")
+    logger.addHandler(file_handler)
+
+    return logger
+
+
+# only executed if called directly
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+
+    parser.add_argument("--load-dir-path", help="""Relative load dir path of saved batch Wikipedia CSV data
+                        to the `concat_data.py` script dir""",
+                        default=os.path.dirname(os.path.abspath(__file__)))
+
+    parser.add_argument("--save-dir-path", help="""Relative save dir path of concatenated Wikipedia CSV data
+                        to the `concat_data.py` script dir""",
+                        default=os.path.dirname(os.path.abspath(__file__)))
+
+    args = parser.parse_args()
+
+
+    logger = set_logger()
+    logger.info("Parsing arguments...")
+
+    load_dir = args.load_dir_path
+    save_dir = args.save_dir_path
+
+    csv_list_files = [os.path.join(load_dir, _filename) for _filename in os.listdir(load_dir) if _filename.endswith(".csv")]
+
+    for idx, path in enumerate(csv_list_files):
+        logger.info(f"Processing data {idx+1} out of {len(csv_list_files)}")
+        if idx == 0:
+            df = pd.read_csv(path)
+        else:
+            df = pd.concat([df, pd.read_csv(path)], ignore_index=True)
+
+    logger.info("Loading done!")
+    logger.info(f"#Data collected: {df.shape[0]}")
+    logger.info("Saving dataset raw form after concatenation...")
+    # note: this writes to "<save-dir-path>.csv", i.e. a CSV named after the dir path itself
+    df.to_csv(f"{save_dir}.csv", index=False)
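The loop above re-concatenates the growing DataFrame for every batch file; collecting the frames first and concatenating once is an equivalent and usually faster pattern. A minimal sketch under the same assumptions (a directory of batch CSVs):

```python
# Sketch: one-shot concatenation of the batched CSVs.
import os
import pandas as pd

load_dir = "."  # hypothetical path to the batch CSVs
paths = [os.path.join(load_dir, f) for f in os.listdir(load_dir) if f.endswith(".csv")]
df = pd.concat((pd.read_csv(p) for p in paths), ignore_index=True)
```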
sea_loader_batched/wiki_loader.py ADDED
@@ -0,0 +1,1253 @@
+# coding=utf-8
+# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Lint as: python3
+"""Wikipedia dataset containing cleaned articles of all languages."""
+
+import os
+import sys
+import bz2
+import codecs
+import json
+import re
+import xml.etree.cElementTree as etree
+from urllib.parse import quote
+
+import datasets
+
+
+logger = datasets.logging.get_logger(__name__)
+logger.setLevel(datasets.logging.DEBUG)
+
+_CITATION = """\
+@ONLINE {wikidump,
+    author = {Wikimedia Foundation},
+    title = {Wikimedia Downloads},
+    url = {https://dumps.wikimedia.org}
+}
+"""
+
+_DESCRIPTION = """\
+Wikipedia dataset containing cleaned articles of all languages.
+The datasets are built from the Wikipedia dump
+(https://dumps.wikimedia.org/) with one split per language. Each example
+contains the content of one full Wikipedia article with cleaning to strip
+markdown and unwanted sections (references, etc.).
+"""
+
+_LICENSE = (
+    "This work is licensed under the Creative Commons Attribution-ShareAlike "
+    "3.0 Unported License. To view a copy of this license, visit "
+    "http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to "
+    "Creative Commons, PO Box 1866, Mountain View, CA 94042, USA."
+)
+
+# Source: https://en.wikipedia.org/wiki/List_of_Wikipedias (accessed 3/1/2019)
+# Removed because no articles: hz.
+WIKIPEDIA_LANGUAGES = [
+    "aa", "ab", "ace", "ady", "af", "ak", "als", "am", "an", "ang", "ar", "arc",
+    "arz", "as", "ast", "atj", "av", "ay", "az", "azb", "ba", "bar", "bat-smg", "bcl",
+    "be", "be-x-old", "bg", "bh", "bi", "bjn", "bm", "bn", "bo", "bpy", "br", "bs",
+    "bug", "bxr", "ca", "cbk-zam", "cdo", "ce", "ceb", "ch", "cho", "chr", "chy", "ckb",
+    "co", "cr", "crh", "cs", "csb", "cu", "cv", "cy", "da", "de", "din", "diq",
+    "dsb", "dty", "dv", "dz", "ee", "el", "eml", "en", "eo", "es", "et", "eu",
+    "ext", "fa", "ff", "fi", "fiu-vro", "fj", "fo", "fr", "frp", "frr", "fur", "fy",
+    "ga", "gag", "gan", "gd", "gl", "glk", "gn", "gom", "gor", "got", "gu", "gv",
+    "ha", "hak", "haw", "he", "hi", "hif", "ho", "hr", "hsb", "ht", "hu", "hy",
+    "ia", "id", "ie", "ig", "ii", "ik", "ilo", "inh", "io", "is", "it", "iu",
+    "ja", "jam", "jbo", "jv", "ka", "kaa", "kab", "kbd", "kbp", "kg", "ki", "kj",
+    "kk", "kl", "km", "kn", "ko", "koi", "krc", "ks", "ksh", "ku", "kv", "kw",
+    "ky", "la", "lad", "lb", "lbe", "lez", "lfn", "lg", "li", "lij", "lmo", "ln",
+    "lo", "lrc", "lt", "ltg", "lv", "mai", "map-bms", "mdf", "mg", "mh", "mhr", "mi",
+    "min", "mk", "ml", "mn", "mr", "mrj", "ms", "mt", "mus", "mwl", "my", "myv",
+    "mzn", "na", "nah", "nap", "nds", "nds-nl", "ne", "new", "ng", "nl", "nn", "no",
+    "nov", "nrm", "nso", "nv", "ny", "oc", "olo", "om", "or", "os", "pa", "pag",
+    "pam", "pap", "pcd", "pdc", "pfl", "pi", "pih", "pl", "pms", "pnb", "pnt", "ps",
+    "pt", "qu", "rm", "rmy", "rn", "ro", "roa-rup", "roa-tara", "ru", "rue", "rw", "sa",
+    "sah", "sat", "sc", "scn", "sco", "sd", "se", "sg", "sh", "si", "simple", "sk",
+    "sl", "sm", "sn", "so", "sq", "sr", "srn", "ss", "st", "stq", "su", "sv",
+    "sw", "szl", "ta", "tcy", "te", "tet", "tg", "th", "ti", "tk", "tl", "tn",
+    "to", "tpi", "tr", "ts", "tt", "tum", "tw", "ty", "tyv", "udm", "ug", "uk",
+    "ur", "uz", "ve", "vec", "vep", "vi", "vls", "vo", "wa", "war", "wo", "wuu",
+    "xal", "xh", "xmf", "yi", "yo", "za", "zea", "zh", "zh-classical", "zh-min-nan", "zh-yue", "zu",
+]
361
+
362
+ # Source: for each Wikipedia language code (example shown for "ab"), aliases for namespaces -2 and 6 accessed via this API call:
363
+ # https://ab.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=namespacealiases|namespaces&format=json&formatversion=2 (accessed 12/21/2021)
364
+ MEDIA_ALIASES = {
365
+ "ab": ["Медиа", "Файл", "Афаил", "Амедиа", "Изображение"],
366
+ "ace": ["Beureukaih", "Gambar", "Alat", "Berkas"],
367
+ "ady": ["Медиа"],
368
+ "af": ["Lêer", "Beeld"],
369
+ "als": ["Medium", "Datei", "Bild"],
370
+ "am": ["ፋይል", "ስዕል"],
371
+ "an": ["Imachen", "Imagen"],
372
+ "ang": ["Ymele", "Biliþ"],
373
+ "ar": ["ميديا", "صورة", "وسائط", "ملف"],
374
+ "arc": ["ܠܦܦܐ", "ܡܝܕܝܐ"],
375
+ "arz": ["ميديا", "صورة", "وسائط", "ملف"],
376
+ "as": ["চিত্ৰ", "चित्र", "চিত্র", "মাধ্যম"],
377
+ "ast": ["Imaxen", "Ficheru", "Imaxe", "Archivu", "Imagen", "Medios"],
378
+ "atj": ["Tipatcimoctakewin", "Natisinahikaniwoc"],
379
+ "av": ["Медиа", "Файл", "Изображение"],
380
+ "ay": ["Medio", "Archivo", "Imagen"],
381
+ "az": ["Mediya", "Şəkil", "Fayl"],
382
+ "azb": ["رسانه", "تصویر", "مدیا", "فایل", "رسانه‌ای"],
383
+ "ba": ["Медиа", "Рәсем", "Файл", "Изображение"],
384
+ "bar": ["Medium", "Datei", "Bild"],
385
+ "bat-smg": ["Vaizdas", "Medėjė", "Abruozdielis"],
386
+ "bcl": ["Medio", "Ladawan"],
387
+ "be": ["Мультымедыя", "Файл", "Выява"],
388
+ "be-x-old": ["Мэдыя", "Файл", "Выява"],
389
+ "bg": ["Медия", "Файл", "Картинка"],
390
+ "bh": ["मीडिया", "चित्र"],
391
+ "bjn": ["Barakas", "Gambar", "Berkas"],
392
+ "bm": ["Média", "Fichier"],
393
+ "bn": ["চিত্র", "মিডিয়া"],
394
+ "bpy": ["ছবি", "মিডিয়া"],
395
+ "br": ["Skeudenn", "Restr"],
396
+ "bs": ["Mediji", "Slika", "Datoteka", "Medija"],
397
+ "bug": ["Gambar", "Berkas"],
398
+ "bxr": ["Файл", "Меди", "Изображение"],
399
+ "ca": ["Fitxer", "Imatge"],
400
+ "cbk-zam": ["Medio", "Archivo", "Imagen"],
401
+ "cdo": ["文件", "媒體", "圖像", "檔案"],
402
+ "ce": ["Хlум", "Медиа", "Сурт", "Файл", "Медйа", "Изображение"],
403
+ "ceb": ["Payl", "Medya", "Imahen"],
404
+ "ch": ["Litratu"],
405
+ "ckb": ["میدیا", "پەڕگە"],
406
+ "co": ["Immagine"],
407
+ "crh": ["Медиа", "Resim", "Файл", "Fayl", "Ресим"],
408
+ "cs": ["Soubor", "Média", "Obrázok"],
409
+ "csb": ["Òbrôzk", "Grafika"],
410
+ "cu": ["Видъ", "Ви́дъ", "Дѣло", "Срѣдьства"],
411
+ "cv": ["Медиа", "Ӳкерчĕк", "Изображение"],
412
+ "cy": ["Delwedd"],
413
+ "da": ["Billede", "Fil"],
414
+ "de": ["Medium", "Datei", "Bild"],
415
+ "din": ["Ciɛl", "Apamduööt"],
416
+ "diq": ["Medya", "Dosya"],
417
+ "dsb": ["Wobraz", "Dataja", "Bild", "Medija"],
418
+ "dty": ["चित्र", "मिडिया"],
419
+ "dv": ["ފައިލު", "މީޑިއާ", "ފައިލް"],
420
+ "el": ["Εικόνα", "Αρχείο", "Μέσο", "Μέσον"],
421
+ "eml": ["Immagine"],
422
+ "eo": ["Dosiero", "Aŭdvidaĵo"],
423
+ "es": ["Medio", "Archivo", "Imagen"],
424
+ "et": ["Pilt", "Fail", "Meedia"],
425
+ "eu": ["Irudi", "Fitxategi"],
426
+ "ext": ["Archivu", "Imagen", "Mediu"],
427
+ "fa": ["رسانه", "تصویر", "مدیا", "پرونده", "رسانه‌ای"],
428
+ "ff": ["Média", "Fichier"],
429
+ "fi": ["Kuva", "Tiedosto"],
430
+ "fiu-vro": ["Pilt", "Meediä"],
431
+ "fo": ["Miðil", "Mynd"],
432
+ "fr": ["Média", "Fichier"],
433
+ "frp": ["Émâge", "Fichiér", "Mèdia"],
434
+ "frr": ["Medium", "Datei", "Bild"],
435
+ "fur": ["Immagine", "Figure"],
436
+ "fy": ["Ofbyld"],
437
+ "ga": ["Íomhá", "Meán"],
438
+ "gag": ["Mediya", "Medya", "Resim", "Dosya", "Dosye"],
439
+ "gan": ["媒体文件", "文件", "文檔", "档案", "媒體", "图像", "圖像", "媒体", "檔案"],
440
+ "gd": ["Faidhle", "Meadhan"],
441
+ "gl": ["Imaxe", "Ficheiro", "Arquivo", "Imagem"],
442
+ "glk": ["رسانه", "تصویر", "پرونده", "فاىل", "رسانه‌ای", "مديا"],
443
+ "gn": ["Medio", "Imagen", "Ta'ãnga"],
444
+ "gom": ["माध्यम", "मिडिया", "फायल"],
445
+ "gor": ["Gambar", "Berkas"],
446
+ "got": ["𐍆𐌴𐌹𐌻𐌰"],
447
+ "gu": ["દ્રશ્ય-શ્રાવ્ય (મિડિયા)", "દ્રશ્ય-શ્રાવ્ય_(મિડિયા)", "ચિત્ર"],
448
+ "gv": ["Coadan", "Meanyn"],
449
+ "hak": ["文件", "媒體", "圖像", "檔案"],
450
+ "haw": ["Kiʻi", "Waihona", "Pāpaho"],
451
+ "he": ["תמונה", "קו", "מדיה", "קובץ"],
452
+ "hi": ["��ीडिया", "चित्र"],
453
+ "hif": ["file", "saadhan"],
454
+ "hr": ["Mediji", "DT", "Slika", "F", "Datoteka"],
455
+ "hsb": ["Wobraz", "Dataja", "Bild"],
456
+ "ht": ["Imaj", "Fichye", "Medya"],
457
+ "hu": ["Kép", "Fájl", "Média"],
458
+ "hy": ["Պատկեր", "Մեդիա"],
459
+ "ia": ["Imagine", "Multimedia"],
460
+ "id": ["Gambar", "Berkas"],
461
+ "ig": ["Nká", "Midia", "Usòrò", "Ákwúkwó orünotu", "Ákwúkwó_orünotu"],
462
+ "ii": ["媒体文件", "文件", "档案", "图像", "媒体"],
463
+ "ilo": ["Midia", "Papeles"],
464
+ "inh": ["Медиа", "Файл", "Изображение"],
465
+ "io": ["Imajo", "Arkivo"],
466
+ "is": ["Miðill", "Mynd"],
467
+ "it": ["Immagine"],
468
+ "ja": ["メディア", "ファイル", "画像"],
469
+ "jbo": ["velsku", "datnyvei"],
470
+ "jv": ["Barkas", "Medhia", "Gambar", "Médhia"],
471
+ "ka": ["მედია", "სურათი", "ფაილი"],
472
+ "kaa": ["Swret", "Таспа", "سۋرەت", "Taspa", "Su'wret", "Сурет", "تاسپا"],
473
+ "kab": ["Tugna"],
474
+ "kbd": ["Медиа", "Файл"],
475
+ "kbp": ["Média", "Fichier"],
476
+ "kg": ["Fisye"],
477
+ "kk": ["Swret", "سۋرەت", "Таспа", "Taspa", "Сурет", "تاسپا"],
478
+ "kl": ["Billede", "Fiileq", "Fil"],
479
+ "km": ["ឯកសារ", "រូបភាព", "មេឌា", "មីឌា"],
480
+ "kn": ["ಚಿತ್ರ", "ಮೀಡಿಯ"],
481
+ "ko": ["미디어", "파일", "그림"],
482
+ "koi": ["Медиа", "Файл", "Изображение"],
483
+ "krc": ["Медиа", "Файл", "Изображение"],
484
+ "ks": ["میڈیا", "فَیِل"],
485
+ "ksh": ["Beld", "Meedije", "Medie", "Belld", "Medium", "Datei", "Meedijum", "Bild"],
486
+ "ku": ["میدیا", "پەڕگە", "Medya", "Wêne"],
487
+ "kv": ["Медиа", "Файл", "Изображение"],
488
+ "kw": ["Restren"],
489
+ "ky": ["Медиа", "Файл"],
490
+ "la": ["Imago", "Fasciculus"],
491
+ "lad": ["Dossia", "Medya", "Archivo", "Dosya", "Imagen", "Meddia"],
492
+ "lb": ["Fichier", "Bild"],
493
+ "lbe": ["Медиа", "Сурат", "Изображение"],
494
+ "lez": ["Медиа", "Mediya", "Файл", "Şəkil", "Изображение"],
495
+ "lfn": ["Fix"],
496
+ "li": ["Afbeelding", "Plaetje", "Aafbeilding"],
497
+ "lij": ["Immaggine", "Immagine"],
498
+ "lmo": ["Immagine", "Imàjine", "Archivi"],
499
+ "ln": ["Média", "Fichier"],
500
+ "lo": ["ສື່ອ", "ສື່", "ຮູບ"],
501
+ "lrc": ["رسانه", "تصویر", "رسانه‌ای", "جانیا", "أسگ", "ڤارئسگأر"],
502
+ "lt": ["Vaizdas", "Medija"],
503
+ "ltg": ["Medeja", "Fails"],
504
+ "lv": ["Attēls"],
505
+ "mai": ["मेडिया", "फाइल"],
506
+ "map-bms": ["Barkas", "Medhia", "Gambar", "Médhia"],
507
+ "mdf": ["Медиа", "Няйф", "Изображение"],
508
+ "mg": ["Rakitra", "Sary", "Média"],
509
+ "mhr": ["Медиа", "Файл", "Изображение"],
510
+ "min": ["Gambar", "Berkas"],
511
+ "mk": ["Податотека", "Медија", "Медиум", "Слика"],
512
+ "ml": ["പ്രമാണം", "ചി", "മീഡിയ", "പ്ര", "ചിത്രം"],
513
+ "mn": ["Медиа", "Файл", "Зураг"],
514
+ "mr": ["चित्र", "मिडिया"],
515
+ "mrj": ["Медиа", "Файл", "Изображение"],
516
+ "ms": ["Fail", "Imej"],
517
+ "mt": ["Midja", "Medja", "Stampa"],
518
+ "mwl": ["Multimédia", "Fexeiro", "Ficheiro", "Arquivo", "Imagem"],
519
+ "my": ["ဖိုင်", "မီဒီယာ"],
520
+ "myv": ["Медия", "Артовкс", "Изображение"],
521
+ "mzn": ["رسانه", "تصویر", "مه‌دیا", "مدیا", "پرونده", "رسانه‌ای"],
522
+ "nah": ["Mēdiatl", "Īxiptli", "Imagen"],
523
+ "nap": ["Fiùra", "Immagine"],
524
+ "nds": ["Datei", "Bild"],
525
+ "nds-nl": ["Ofbeelding", "Afbeelding", "Bestaand"],
526
+ "ne": ["मीडिया", "चित्र"],
527
+ "new": ["किपा", "माध्यम"],
528
+ "nl": ["Bestand", "Afbeelding"],
529
+ "nn": ["Fil", "Bilde", "Filpeikar"],
530
+ "no": ["Fil", "Medium", "Bilde"],
531
+ "nov": [],
532
+ "nrm": ["Média", "Fichier"],
533
+ "nso": ["Seswantšho"],
534
+ "nv": ["Eʼelyaaígíí"],
535
+ "oc": ["Imatge", "Fichièr", "Mèdia"],
536
+ "olo": ["Kuva", "Medii", "Failu"],
537
+ "or": ["ମାଧ୍ୟମ", "ଫାଇଲ"],
538
+ "os": ["Ныв", "Медиа", "Файл", "Изображение"],
539
+ "pa": ["ਤਸਵੀਰ", "ਮੀਡੀਆ"],
540
+ "pcd": ["Média", "Fichier"],
541
+ "pdc": ["Medium", "Datei", "Bild", "Feil"],
542
+ "pfl": ["Dadai", "Medium", "Datei", "Bild"],
543
+ "pi": ["मीडिया", "पटिमा"],
544
+ "pl": ["Plik", "Grafika"],
545
+ "pms": ["Figura", "Immagine"],
546
+ "pnb": ["میڈیا", "تصویر", "فائل"],
547
+ "pnt": ["Εικόνα", "Αρχείον", "Εικόναν", "Μέσον"],
548
+ "ps": ["انځور", "رسنۍ", "دوتنه"],
549
+ "pt": ["Multimédia", "Ficheiro", "Arquivo", "Imagem"],
550
+ "qu": ["Midya", "Imagen", "Rikcha"],
551
+ "rm": ["Multimedia", "Datoteca"],
552
+ "rmy": ["Fişier", "Mediya", "Chitro", "Imagine"],
553
+ "ro": ["Fişier", "Imagine", "Fișier"],
554
+ "roa-rup": ["Fişier", "Imagine", "Fișier"],
555
+ "roa-tara": ["Immagine"],
556
+ "ru": ["Медиа", "Файл", "Изображение"],
557
+ "rue": ["Медіа", "Медиа", "Файл", "Изображение", "Зображення"],
558
+ "rw": ["Dosiye", "Itangazamakuru"],
559
+ "sa": ["चित्रम्", "माध्यमम्", "सञ्चिका", "माध्यम", "चित्रं"],
560
+ "sah": ["Миэдьийэ", "Ойуу", "Билэ", "Изображение"],
561
+ "sat": ["ᱨᱮᱫ", "ᱢᱤᱰᱤᱭᱟ"],
562
+ "sc": ["Immàgini"],
563
+ "scn": ["Immagine", "Mmàggini", "Mèdia"],
564
+ "sd": ["عڪس", "ذريعات", "فائل"],
565
+ "se": ["Fiila"],
566
+ "sg": ["Média", "Fichier"],
567
+ "sh": ["Mediji", "Slika", "Медија", "Datoteka", "Medija", "Слика"],
568
+ "si": ["රූපය", "මාධ්‍යය", "ගොනුව"],
569
+ "sk": ["Súbor", "Obrázok", "Médiá"],
570
+ "sl": ["Slika", "Datoteka"],
571
+ "sq": ["Figura", "Skeda"],
572
+ "sr": ["Датотека", "Medij", "Slika", "Медија", "Datoteka", "Медиј", "Medija", "Слика"],
573
+ "srn": ["Afbeelding", "Gefre"],
574
+ "stq": ["Bielde", "Bild"],
575
+ "su": ["Média", "Gambar"],
576
+ "sv": ["Fil", "Bild"],
577
+ "sw": ["Faili", "Picha"],
578
+ "szl": ["Plik", "Grafika"],
579
+ "ta": ["படிமம்", "ஊடகம்"],
580
+ "tcy": ["ಮಾದ್ಯಮೊ", "ಫೈಲ್"],
581
+ "te": ["ఫైలు", "దస్త్రం", "బొమ్మ", "మీడియా"],
582
+ "tet": ["Imajen", "Arquivo", "Imagem"],
583
+ "tg": ["Акс", "Медиа"],
584
+ "th": ["ไฟล์", "สื่อ", "ภาพ"],
585
+ "ti": ["ፋይል", "ሜድያ"],
586
+ "tk": ["Faýl"],
587
+ "tl": ["Midya", "Talaksan"],
588
+ "tpi": ["Fail"],
589
+ "tr": ["Medya", "Resim", "Dosya", "Ortam"],
590
+ "tt": ["Медиа", "Рәсем", "Файл", "Räsem", "Изображение"],
591
+ "ty": ["Média", "Fichier"],
592
+ "tyv": ["Медиа", "Файл", "Изображение"],
593
+ "udm": ["Медиа", "Файл", "Суред", "Изображение"],
594
+ "ug": ["ۋاسىتە", "ھۆججەت"],
595
+ "uk": ["Медіа", "Медиа", "Файл", "Изображение", "Зображення"],
596
+ "ur": ["میڈیا", "تصویر", "وسیط", "زریعہ", "فائل", "ملف"],
597
+ "uz": ["Mediya", "Tasvir", "Fayl"],
598
+ "vec": ["Immagine", "Imàjine", "Mèdia"],
599
+ "vep": ["Pilt", "Fail"],
600
+ "vi": ["Phương_tiện", "Tập_tin", "Hình", "Tập tin", "Phương tiện"],
601
+ "vls": ["Afbeelding", "Ofbeeldienge"],
602
+ "vo": ["Ragiv", "Magod", "Nünamakanäd"],
603
+ "wa": ["Imådje"],
604
+ "war": ["Medya", "Fayl", "Paypay"],
605
+ "wo": ["Xibaarukaay", "Dencukaay"],
606
+ "wuu": ["文件", "档案", "图像", "媒体"],
607
+ "xal": ["Аһар", "Боомг", "Изображение", "Зург"],
608
+ "xmf": ["მედია", "სურათი", "ფაილი"],
609
+ "yi": ["מעדיע", "תמונה", "טעקע", "בילד"],
610
+ "yo": ["Fáìlì", "Amóhùnmáwòrán", "Àwòrán"],
611
+ "za": ["媒体文件", "文件", "档案", "图像", "媒体"],
612
+ "zea": ["Afbeelding", "Plaetje"],
613
+ "zh": ["媒体文件", "F", "文件", "媒體", "档案", "图像", "圖像", "媒体", "檔案"],
614
+ "zh-classical": ["文件", "媒體", "圖像", "檔案"],
615
+ "zh-min-nan": ["tóng-àn", "文件", "媒體", "Mûi-thé", "圖像", "檔案"],
616
+ "zh-yue": ["檔", "档", "文件", "图", "媒體", "圖", "档案", "图像", "圖像", "媒体", "檔案"],
617
+ }
618
+
619
+ # Source: for each Wikipedia language code (example shown for "ab"), aliases for namespace 14 accessed via this API call:
620
+ # https://ab.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=namespacealiases|namespaces&format=json&formatversion=2 (accessed 12/21/2021)
621
+ CAT_ALIASES = {
622
+ "ab": ["Категория", "Акатегориа"],
623
+ "ace": ["Kawan", "Kategori"],
624
+ "af": ["Kategorie"],
625
+ "ak": ["Nkyekyem"],
626
+ "als": ["Kategorie"],
627
+ "am": ["መደብ"],
628
+ "an": ["Categoría"],
629
+ "ang": ["Flocc"],
630
+ "ar": ["تصنيف"],
631
+ "arc": ["ܣܕܪܐ"],
632
+ "arz": ["تصنيف"],
633
+ "as": ["CAT", "শ্ৰেণী", "श्रेणी", "শ্রেণী"],
634
+ "ast": ["Categoría"],
635
+ "atj": ["Tipanictawin"],
636
+ "av": ["Категория"],
637
+ "ay": ["Categoría"],
638
+ "az": ["Kateqoriya"],
639
+ "azb": ["بؤلمه"],
640
+ "ba": ["Төркөм", "Категория"],
641
+ "bar": ["Kategorie"],
642
+ "bat-smg": ["Kategorija", "Kateguorėjė"],
643
+ "bcl": ["Kategorya"],
644
+ "be": ["Катэгорыя"],
645
+ "be-x-old": ["Катэгорыя"],
646
+ "bg": ["Категория"],
647
+ "bh": ["श्रेणी"],
648
+ "bjn": ["Tumbung", "Kategori"],
649
+ "bm": ["Catégorie"],
650
+ "bn": ["বিষয়শ্রেণী", "വിഭാഗം"],
651
+ "bpy": ["থাক"],
652
+ "br": ["Rummad"],
653
+ "bs": ["Kategorija"],
654
+ "bug": ["Kategori"],
655
+ "bxr": ["Категори", "Категория"],
656
+ "ca": ["Categoria"],
657
+ "cbk-zam": ["Categoría"],
658
+ "cdo": ["分類"],
659
+ "ce": ["Категори", "Тоба", "Кадегар"],
660
+ "ceb": ["Kategoriya"],
661
+ "ch": ["Katigoria"],
662
+ "ckb": ["پ", "پۆل"],
663
+ "co": ["Categoria"],
664
+ "crh": ["Категория", "Kategoriya"],
665
+ "cs": ["Kategorie"],
666
+ "csb": ["Kategòrëjô"],
667
+ "cu": ["Катигорї", "Категория", "Катигорїꙗ"],
668
+ "cv": ["Категори"],
669
+ "cy": ["Categori"],
670
+ "da": ["Kategori"],
671
+ "de": ["Kategorie"],
672
+ "din": ["Bekätakthook"],
673
+ "diq": ["Kategoriye", "Kategori"],
674
+ "dsb": ["Kategorija"],
675
+ "dty": ["श्रेणी"],
676
+ "dv": ["ޤިސްމު"],
677
+ "el": ["Κατηγορία"],
678
+ "eml": ["Categoria"],
679
+ "eo": ["Kategorio"],
680
+ "es": ["CAT", "Categoría"],
681
+ "et": ["Kategooria"],
682
+ "eu": ["Kategoria"],
683
+ "ext": ["Categoría", "Categoria"],
684
+ "fa": ["رده"],
685
+ "ff": ["Catégorie"],
686
+ "fi": ["Luokka"],
687
+ "fiu-vro": ["Katõgooria"],
688
+ "fo": ["Bólkur"],
689
+ "fr": ["Catégorie"],
690
+ "frp": ["Catègorie"],
691
+ "frr": ["Kategorie"],
692
+ "fur": ["Categorie"],
693
+ "fy": ["Kategory"],
694
+ "ga": ["Rang", "Catagóir"],
695
+ "gag": ["Kategori", "Kategoriya"],
696
+ "gan": ["分類", "分类"],
697
+ "gd": ["Roinn-seòrsa"],
698
+ "gl": ["Categoría"],
699
+ "glk": ["جرگه", "رده"],
700
+ "gn": ["Ñemohenda"],
701
+ "gom": ["वर्ग", "श्रेणी"],
702
+ "gor": ["Dalala"],
703
+ "got": ["𐌷𐌰𐌽𐍃𐌰"],
704
+ "gu": ["શ્રેણી", "CAT", "શ્રે"],
705
+ "gv": ["Ronney"],
706
+ "hak": ["分類"],
707
+ "haw": ["Māhele"],
708
+ "he": ["קטגוריה", "קט"],
709
+ "hi": ["श्र", "श्रेणी"],
710
+ "hif": ["vibhag"],
711
+ "hr": ["CT", "KT", "Kategorija"],
712
+ "hsb": ["Kategorija"],
713
+ "ht": ["Kategori"],
714
+ "hu": ["Kategória"],
715
+ "hy": ["Կատեգորիա"],
716
+ "ia": ["Categoria"],
717
+ "id": ["Kategori"],
718
+ "ie": ["Categorie"],
719
+ "ig": ["Ébéonọr", "Òtù"],
720
+ "ii": ["分类"],
721
+ "ilo": ["Kategoria"],
722
+ "inh": ["ОагӀат"],
723
+ "io": ["Kategorio"],
724
+ "is": ["Flokkur"],
725
+ "it": ["CAT", "Categoria"],
726
+ "ja": ["カテゴリ"],
727
+ "jbo": ["klesi"],
728
+ "jv": ["Kategori"],
729
+ "ka": ["კატეგორია"],
730
+ "kaa": ["Sanat", "Kategoriya", "Санат", "سانات"],
731
+ "kab": ["Taggayt"],
732
+ "kbd": ["Категория", "Категориэ"],
733
+ "kbp": ["Catégorie"],
734
+ "kg": ["Kalasi"],
735
+ "kk": ["Sanat", "Санат", "سانات"],
736
+ "kl": ["Sumut_atassuseq", "Kategori", "Sumut atassuseq"],
737
+ "km": ["ចំនាត់ថ្នាក់ក្រុម", "ចំណាត់ក្រុម", "ចំណាត់ថ្នាក់ក្រុម"],
738
+ "kn": ["ವರ್ಗ"],
739
+ "ko": ["분류"],
740
+ "koi": ["Категория"],
741
+ "krc": ["Категория"],
742
+ "ks": ["زٲژ"],
743
+ "ksh": ["Saachjropp", "Saachjrop", "Katejori", "Kategorie", "Saachjrupp", "Kattejori", "Sachjrop"],
744
+ "ku": ["Kategorî", "پۆل"],
745
+ "kv": ["Категория"],
746
+ "kw": ["Class", "Klass"],
747
+ "ky": ["Категория"],
748
+ "la": ["Categoria"],
749
+ "lad": ["Kateggoría", "Katēggoría", "Categoría"],
750
+ "lb": ["Kategorie"],
751
+ "lbe": ["Категория"],
752
+ "lez": ["Категория"],
753
+ "lfn": ["Categoria"],
754
+ "li": ["Categorie", "Kategorie"],
755
+ "lij": ["Categorîa", "Categoria"],
756
+ "lmo": ["Categuria", "Categoria"],
757
+ "ln": ["Catégorie"],
758
+ "lo": ["ໝວດ"],
759
+ "lrc": ["دأسە"],
760
+ "lt": ["Kategorija"],
761
+ "ltg": ["Kategoreja"],
762
+ "lv": ["Kategorija"],
763
+ "mai": ["CA", "श्रेणी"],
764
+ "map-bms": ["Kategori"],
765
+ "mdf": ["Категорие", "Категория"],
766
+ "mg": ["Sokajy", "Catégorie"],
767
+ "mhr": ["Категория", "Категорий"],
768
+ "min": ["Kategori"],
769
+ "mk": ["Категорија"],
770
+ "ml": ["വിഭാഗം", "വി", "വർഗ്ഗം", "വ"],
771
+ "mn": ["Ангилал"],
772
+ "mr": ["वर्ग"],
773
+ "mrj": ["Категори", "Категория"],
774
+ "ms": ["Kategori"],
775
+ "mt": ["Kategorija"],
776
+ "mwl": ["Catadorie", "Categoria"],
777
+ "my": ["ကဏ္ဍ"],
778
+ "myv": ["Категория"],
779
+ "mzn": ["رج", "رده"],
780
+ "nah": ["Neneuhcāyōtl", "Categoría"],
781
+ "nap": ["Categurìa", "Categoria"],
782
+ "nds": ["Kategorie"],
783
+ "nds-nl": ["Categorie", "Kattegerie", "Kategorie"],
784
+ "ne": ["श्रेणी"],
785
+ "new": ["पुचः"],
786
+ "nl": ["Categorie"],
787
+ "nn": ["Kategori"],
788
+ "no": ["Kategori"],
789
+ "nrm": ["Catégorie"],
790
+ "nso": ["Setensele"],
791
+ "nv": ["Tʼááłáhági_átʼéego", "Tʼááłáhági átʼéego"],
792
+ "oc": ["Categoria"],
793
+ "olo": ["Kategourii"],
794
+ "or": ["ବିଭାଗ", "ଶ୍ରେଣୀ"],
795
+ "os": ["Категори"],
796
+ "pa": ["ਸ਼੍ਰੇਣੀ"],
797
+ "pcd": ["Catégorie"],
798
+ "pdc": ["Abdeeling", "Kategorie"],
799
+ "pfl": ["Kadegorie", "Sachgrubb", "Kategorie"],
800
+ "pi": ["विभाग"],
801
+ "pl": ["Kategoria"],
802
+ "pms": ["Categorìa"],
803
+ "pnb": ["گٹھ"],
804
+ "pnt": ["Κατηγορίαν"],
805
+ "ps": ["وېشنيزه"],
806
+ "pt": ["Categoria"],
807
+ "qu": ["Katiguriya"],
808
+ "rm": ["Categoria"],
809
+ "rmy": ["Shopni"],
810
+ "ro": ["Categorie"],
811
+ "roa-rup": ["Categorie"],
812
+ "roa-tara": ["Categoria"],
813
+ "ru": ["Категория", "К"],
814
+ "rue": ["Категория", "Катеґорія"],
815
+ "rw": ["Ikiciro"],
816
+ "sa": ["वर्गः"],
817
+ "sah": ["Категория"],
818
+ "sat": ["ᱛᱷᱚᱠ"],
819
+ "sc": ["Categoria"],
820
+ "scn": ["Catigurìa"],
821
+ "sd": ["زمرو"],
822
+ "se": ["Kategoriija"],
823
+ "sg": ["Catégorie"],
824
+ "sh": ["Kategorija", "Категорија"],
825
+ "si": ["ප්‍රවර්ගය"],
826
+ "sk": ["Kategória"],
827
+ "sl": ["Kategorija"],
828
+ "sq": ["Kategoria", "Kategori"],
829
+ "sr": ["Kategorija", "Категорија"],
830
+ "srn": ["Categorie", "Guru"],
831
+ "stq": ["Kategorie"],
832
+ "su": ["Kategori"],
833
+ "sv": ["Kategori"],
834
+ "sw": ["Jamii"],
835
+ "szl": ["Kategoryjo", "Kategoria"],
836
+ "ta": ["பகுப்பு"],
837
+ "tcy": ["ವರ್ಗೊ"],
838
+ "te": ["వర్గం"],
839
+ "tet": ["Kategoría", "Kategoria"],
840
+ "tg": ["Гурӯҳ"],
841
+ "th": ["หมวดหมู่"],
842
+ "ti": ["መደብ"],
843
+ "tk": ["Kategoriýa"],
844
+ "tl": ["Kategorya", "Kaurian"],
845
+ "tpi": ["Grup"],
846
+ "tr": ["Kategori", "KAT"],
847
+ "tt": ["Төркем", "Törkem", "Категория"],
848
+ "ty": ["Catégorie"],
849
+ "tyv": ["Аңгылал", "Категория"],
850
+ "udm": ["Категория"],
851
+ "ug": ["تۈر"],
852
+ "uk": ["Категория", "Категорія"],
853
+ "ur": ["زمرہ"],
854
+ "uz": ["Turkum", "Kategoriya"],
855
+ "vec": ["Categoria"],
856
+ "vep": ["Kategorii"],
857
+ "vi": ["Thể_loại", "Thể loại"],
858
+ "vls": ["Categorie"],
859
+ "vo": ["Klad"],
860
+ "wa": ["Categoreye"],
861
+ "war": ["Kaarangay"],
862
+ "wo": ["Wàll", "Catégorie"],
863
+ "wuu": ["分类"],
864
+ "xal": ["Янз", "Әәшл"],
865
+ "xmf": ["კატეგორია"],
866
+ "yi": ["קאטעגאריע", "קאַטעגאָריע"],
867
+ "yo": ["Ẹ̀ka"],
868
+ "za": ["分类"],
869
+ "zea": ["Categorie"],
870
+ "zh": ["分类", "分類", "CAT"],
871
+ "zh-classical": ["分類", "CAT"],
872
+ "zh-min-nan": ["分類", "Lūi-pia̍t"],
873
+ "zh-yue": ["分类", "分類", "类", "類"],
874
+ }
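Aside (not part of the file): the comments above describe fetching these alias tables from each wiki's siteinfo API. A hedged sketch of that call with ```requests```, using the endpoint and parameters cited in the comments (namespace IDs -2/6 for media/file and 14 for category); the response field names follow the MediaWiki API's formatversion=2 JSON and should be double-checked:

```python
# Sketch: pull namespace aliases (media/file: -2 and 6, category: 14)
# from a wiki's siteinfo API, per the URLs cited in the comments above.
import requests

def fetch_aliases(lang: str, ns_ids: set) -> list:
    url = f"https://{lang}.wikipedia.org/w/api.php"
    params = {
        "action": "query", "meta": "siteinfo",
        "siprop": "namespacealiases|namespaces",
        "format": "json", "formatversion": 2,
    }
    data = requests.get(url, params=params, timeout=30).json()["query"]
    names = [ns["name"] for ns in data["namespaces"].values() if ns["id"] in ns_ids]
    aliases = [a["alias"] for a in data.get("namespacealiases", []) if a["id"] in ns_ids]
    return sorted(set(names + aliases))

# e.g. fetch_aliases("ab", {-2, 6}) for MEDIA_ALIASES, fetch_aliases("ab", {14}) for CAT_ALIASES
```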
875
+
876
+ _BASE_URL_TMPL = "https://dumps.wikimedia.org/{lang}wiki/{date}/"
877
+ _INFO_FILE = "dumpstatus.json"
878
+
879
+
880
+ _VERSION = datasets.Version("2.0.0", "")
881
+ _GiB_SIZE_IDENTIFIER = 1.074e+9
882
+
883
+ class WikipediaConfig(datasets.BuilderConfig):
+     """BuilderConfig for Wikipedia."""
+
+     def __init__(self, language=None, date=None, version=_VERSION,
+                  split_size: int=_GiB_SIZE_IDENTIFIER, subset_file_to_process: str=":",
+                  force_rerun_split: bool=False, **kwargs):
+         """BuilderConfig for Wikipedia.
+
+         Args:
+           language: string, the language code for the Wikipedia dump to use.
+           date: string, date of the Wikipedia dump in YYYYMMDD format. A list of
+             available dates can be found at https://dumps.wikimedia.org/enwiki/.
+           split_size: int, max uncompressed size in bytes before a dump file is
+             split into smaller chunks.
+           subset_file_to_process: string, a "start:end" slice (or a single index)
+             selecting which downloaded dump files to process; ":" selects all.
+           force_rerun_split: bool, re-split dump files even if split chunks from
+             a previous run already exist.
+           **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(
+             name=f"{date}.{language}",
+             description=f"Wikipedia dataset for {language}, parsed from {date} dump.",
+             version=version,
+             **kwargs,
+         )
+         self.date = date
+         self.language = language
+         self.split_size = split_size
+         self.force_rerun_split = force_rerun_split
+
+         _subsets = str(subset_file_to_process).split(":")
+         if len(_subsets) > 2:
+             raise ValueError(f"Unexpected format of `subset_file_to_process`! Received {subset_file_to_process}!")
+         elif len(_subsets) == 1:
+             if _subsets[0] == '':
+                 # take all of the files
+                 self.start_file_split, self.end_file_split = None, None
+             else:
+                 # force cast to int; this raises a ValueError if it isn't numeric
+                 _subset_no = int(_subsets[0])
+                 self.start_file_split, self.end_file_split = _subset_no, _subset_no + 1
+         else:
+             if _subsets == ['', '']:
+                 # take all of the files
+                 self.start_file_split, self.end_file_split = None, None
+             else:
+                 self.start_file_split, self.end_file_split = int(_subsets[0]), int(_subsets[1]) + 1
+
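As a quick sanity check on the slice semantics above, a minimal sketch (not part of the diff; the `language`/`date` values are placeholders):

```python
# Hypothetical configs illustrating WikipediaConfig's subset parsing.
cfg = WikipediaConfig(language="ace", date="20231101", subset_file_to_process=":")
assert (cfg.start_file_split, cfg.end_file_split) == (None, None)  # process all files

cfg = WikipediaConfig(language="ace", date="20231101", subset_file_to_process="3")
assert (cfg.start_file_split, cfg.end_file_split) == (3, 4)        # a single file index

cfg = WikipediaConfig(language="ace", date="20231101", subset_file_to_process="0:2")
assert (cfg.start_file_split, cfg.end_file_split) == (0, 3)        # end index is inclusive
```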
+ _DATE = "20220301"
+
+
+ class Wikipedia(datasets.BeamBasedBuilder):
+     """Wikipedia dataset."""
+
+     BUILDER_CONFIG_CLASS = WikipediaConfig
+     BUILDER_CONFIGS = [
+         WikipediaConfig(
+             language=lang,
+             date=_DATE,
+         )  # pylint:disable=g-complex-comprehension
+         for lang in WIKIPEDIA_LANGUAGES
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "url": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                 }
+             ),
+             # No default supervised_keys.
+             supervised_keys=None,
+             homepage="https://dumps.wikimedia.org",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager, pipeline):
+         def _base_url(lang):
+             return _BASE_URL_TMPL.format(lang=lang.replace("-", "_"), date=self.config.date)
+
+         lang = self.config.language
+
+         info_url = _base_url(lang) + _INFO_FILE
+         # Use dictionary since testing mock always returns the same result.
+         downloaded_files = dl_manager.download_and_extract({"info": info_url})
+
+         xml_urls, is_split_xml = [], []
+         total_bytes = 0
+         with open(downloaded_files["info"], encoding="utf-8") as f:
+             dump_info = json.load(f)
+         multistream_dump_info = dump_info["jobs"]["articlesmultistreamdump"]
+         assert (
+             multistream_dump_info["status"] == "done"
+         ), "Specified dump (%s) multistream status is not 'done': %s" % (
+             _base_url(lang),
+             multistream_dump_info["status"],
+         )
+
+         for fname, info in multistream_dump_info["files"].items():
+             if ".xml" not in fname:
+                 continue
+             if info["size"] > self.config.split_size:
+                 is_split_xml.append(True)
+             else:
+                 is_split_xml.append(False)
+             total_bytes += info["size"]
+             xml_urls.append(_base_url(lang) + fname)
+
+         # Use dictionary since testing mock always returns the same result.
+         downloaded_files = dl_manager.download({"xml": xml_urls})
+
+         logger.info("found %s file(s) that need to be split", str(sum(is_split_xml)))
+         downloaded_files = split_bz2_files(downloaded_files, is_split_xml, self.config.split_size, self.config.force_rerun_split)
+
+         # filter downloaded paths based on the start-end file splits
+         if self.config.start_file_split is not None and self.config.end_file_split is not None:
+             _new_files = downloaded_files["xml"][self.config.start_file_split:self.config.end_file_split]
+             if len(_new_files) == 0:
+                 raise ValueError("The configured file-split range results in zero files to process!")
+             downloaded_files["xml"] = _new_files
+
+         if not pipeline.is_local():
+             downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline)
+
+         return [
+             datasets.SplitGenerator(  # pylint:disable=g-complex-comprehension
+                 name=datasets.Split.TRAIN, gen_kwargs={"filepaths": downloaded_files["xml"], "language": lang}
+             )
+         ]
+
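For orientation, the `dumpstatus.json` payload this method consumes looks roughly like the following (the file name and size are illustrative, not taken from a real dump):

```python
# Illustrative shape of the dump-status metadata read by _split_generators.
dump_info = {
    "jobs": {
        "articlesmultistreamdump": {
            "status": "done",  # asserted before any file is downloaded
            "files": {
                "acewiki-20231101-pages-articles-multistream.xml.bz2": {
                    "size": 5_000_000,  # bytes; compared against config.split_size
                },
            },
        },
    },
}
```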
+
+     def _build_pcollection(self, pipeline, filepaths, language):
+         """Build PCollection of examples in the raw (text) form."""
+         import apache_beam as beam
+         import mwparserfromhell
+
+         def _extract_content(filepath):
+             """Extracts article content from a single WikiMedia XML file."""
+             logger.info("generating examples from = %s", filepath)
+             with beam.io.filesystems.FileSystems.open(filepath) as f:
+                 f = bz2.BZ2File(filename=f)
+                 # Workaround due to: https://github.com/tensorflow/tensorflow/issues/33563
+                 utf_f = codecs.getreader("utf-8")(f)
+                 context = etree.iterparse(utf_f, events=("end",))
+                 for _unused_event, elem in context:
+                     if not elem.tag.endswith("page"):
+                         continue
+                     namespace = elem.tag[:-4]
+                     title = elem.find(f"./{namespace}title").text
+                     ns = elem.find(f"./{namespace}ns").text
+                     id_ = elem.find(f"./{namespace}id").text
+                     red_ = elem.find(f"./{namespace}redirect")
+
+                     # Filter pages that are not in the "main" namespace.
+                     if ns != "0":
+                         elem.clear()
+                         continue
+
+                     raw_content = elem.find(f"./{namespace}revision/{namespace}text").text
+                     elem.clear()
+
+                     # Filter redirects.
+                     if raw_content is None or red_ is not None:
+                         beam.metrics.Metrics.counter(language, "filtered-redirects").inc()
+                         continue
+
+                     beam.metrics.Metrics.counter(language, "extracted-examples").inc()
+                     yield (id_, title, raw_content)
+
+         def _clean_content(inputs, language):
+             """Cleans raw wikicode to extract text."""
+             id_, title, raw_content = inputs
+             try:
+                 text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell, language=language)
+             except mwparserfromhell.parser.ParserError as e:
+                 beam.metrics.Metrics.counter(language, "parser-error").inc()
+                 logger.error("mwparserfromhell ParseError: %s", e)
+                 return
+
+             if not text:
+                 beam.metrics.Metrics.counter(language, "empty-clean-examples").inc()
+                 return
+
+             url = _construct_url(title, language)
+
+             beam.metrics.Metrics.counter(language, "cleaned-examples").inc()
+
+             yield id_, {"id": id_, "url": url, "title": title, "text": text}
+
+         return (
+             pipeline
+             | "Initialize" >> beam.Create(filepaths)
+             | "Extract content" >> beam.FlatMap(_extract_content)
+             | "Distribute" >> beam.transforms.Reshuffle()
+             | "Clean content" >> beam.FlatMap(_clean_content, language=language)
+         )
+
+
+ def split_bz2_files(downloaded_files_dict: dict, is_split_xml_identifier: list,
+                     desired_uncompressed_filesize_per_split: int, force_rerun: bool=False):
+     assert len(downloaded_files_dict.keys()) == 1, "Unexpected format of arg `downloaded_files_dict`!"
+
+     dict_key = list(downloaded_files_dict.keys())[0]
+     _dict_value = list(downloaded_files_dict.values())[0]
+
+     assert isinstance(_dict_value, list), f"Expected dict value has type of `list`! Received `{type(_dict_value)}`!"
+
+     def _create_chunk_file(filename: str, filecount: int):
+         _file_name = f"{filename if not filename.endswith('.xml.bz2') else filename[:-8]}_{str(filecount)}"
+         return bz2.BZ2File(_file_name, 'w'), _file_name
+
+     def _close_and_add_closing_tag(file, is_expected_to_be_opened=True):
+         if not file.closed:
+             if not is_expected_to_be_opened:
+                 logger.debug("File isn't closed yet! Closing...")
+             else:
+                 # the file is expected to be open here; since this isn't the end of the
+                 # split files, append the closing tag before closing
+                 file.write(b'</mediawiki>')
+             file.close()
+
+     def _preempt_file_with_data(file, list_of_data):
+         for line in list_of_data:
+             file.write(line)
+
+     # inspired by this: https://stackoverflow.com/a/6411933
+     def _split_data_to_smaller_sizes(filename: str):
+         """Extracts and splits a bz2 file into smaller ones."""
+
+         # list of file names produced by the split
+         split_filename = []
+
+         # XML header constructor vars
+         header_list, keep_appending_for_header = [], True
+
+         # counters
+         filecount, line_cnt, text_data_size = 1, 0, 0
+
+         # open the source bz2 for splitting
+         logger.info("Reading bz2 file %s for splitting", filename)
+         bzfile = bz2.BZ2File(filename)
+
+         # open the chunk file in write mode
+         chunk_file, chunk_file_name = _create_chunk_file(filename, filecount)
+         logger.info("Creating new split filename %s", chunk_file_name)
+
+         for line in bzfile:
+             chunk_file.write(line)
+
+             # collect the header that is reusable for subsequent files, since in a
+             # wikidump all split files share the same first few lines
+             if keep_appending_for_header:
+                 header_list.append(line)
+
+             # deactivate once </siteinfo> is found, which marks the end of the XML header
+             if b'</siteinfo>' in line:
+                 keep_appending_for_header = False
+
+             # update counters
+             line_cnt += 1
+             text_data_size += sys.getsizeof(line)
+
+             # </page> marks the end of a wiki page
+             if b'</page>\n' in line and text_data_size > desired_uncompressed_filesize_per_split:
+                 _close_and_add_closing_tag(chunk_file)
+
+                 # log status and info for every successful split
+                 logger.debug("total new data %s has reached the threshold of %s after %d line(s)", str(text_data_size), str(desired_uncompressed_filesize_per_split), line_cnt)
+                 split_filename.append(chunk_file_name)
+
+                 # reset the counters and increment the filename
+                 text_data_size = 0
+                 line_cnt = 0
+                 filecount += 1
+
+                 chunk_file, chunk_file_name = _create_chunk_file(filename, filecount)
+                 logger.info("creating new split filename %s", chunk_file_name)
+                 _preempt_file_with_data(chunk_file, header_list)
+
+         # close the final chunk (it holds any remaining pages plus the source's
+         # own closing tag) and record it too
+         _close_and_add_closing_tag(chunk_file, is_expected_to_be_opened=False)
+         split_filename.append(chunk_file_name)
+
+         return split_filename
+
+     new_filename_collection = []
+     for iter_idx, _filename in enumerate(_dict_value):
+         if is_split_xml_identifier[iter_idx]:
+             _full_path_dir = _filename.split("/")
+             _folder_name = "/".join(_full_path_dir[:-1])
+             _init_filename = _full_path_dir[-1]
+             detected_split_filenames = sorted([os.path.join(_folder_name, file) for file in os.listdir(_folder_name) if bool(re.match(rf"{re.escape(_init_filename)}_\d+", file))])
+
+             if force_rerun or len(detected_split_filenames) == 0:
+                 if force_rerun:
+                     logger.debug("force rerun is true")
+                 elif len(detected_split_filenames) == 0:
+                     logger.debug("no existing split files detected")
+                 logger.info("splitting file %s", _filename)
+                 split_filenames = _split_data_to_smaller_sizes(_filename)
+                 logger.info("file has been split into %d part(s)", len(split_filenames))
+                 new_filename_collection.extend(split_filenames)
+             else:
+                 logger.info("existing file(s) found with a total of %d file(s)", len(detected_split_filenames))
+                 new_filename_collection.extend(detected_split_filenames)
+
+         else:
+             new_filename_collection.append(_filename)
+
+     return {dict_key: new_filename_collection}
+
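A minimal usage sketch of the splitter above (the path is a placeholder; in practice the paths come from `dl_manager.download`):

```python
# Hypothetical call, mirroring how _split_generators invokes split_bz2_files.
downloaded = {"xml": ["/cache/acewiki-20231101-pages-articles-multistream.xml.bz2"]}
needs_split = [True]  # parallel to downloaded["xml"]; True when size > split_size
out = split_bz2_files(downloaded, needs_split,
                      desired_uncompressed_filesize_per_split=int(1.074e9))
# out["xml"] then lists the bz2 chunks, named by stripping ".xml.bz2" and
# appending an index: ".../...-multistream_1", ".../...-multistream_2", ...
```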
+
+
+ def _parse_and_clean_wikicode(raw_content, parser, language):
+     """Strips formatting and unwanted sections from raw page content."""
+     wikicode = parser.parse(raw_content)
+
+     # Filters for magic words that are parser instructions -- e.g., __NOTOC__
+     re_rm_magic = re.compile("__[A-Z]*__", flags=re.UNICODE)
+
+     # Filters for file/image links.
+     media_prefixes = "|".join(["File", "Image", "Media"] + MEDIA_ALIASES.get(language, []))
+     re_rm_wikilink = re.compile(f"^(?:{media_prefixes}):", flags=re.IGNORECASE | re.UNICODE)
+
+     def rm_wikilink(obj):
+         return bool(re_rm_wikilink.match(str(obj.title)))
+
+     # Filters for references and tables.
+     def rm_tag(obj):
+         return str(obj.tag) in {"ref", "table"}
+
+     # Leave category links in place but remove the category prefixes.
+     cat_prefixes = "|".join(["Category"] + CAT_ALIASES.get(language, []))
+     re_clean_wikilink = re.compile(f"^(?:{cat_prefixes}):", flags=re.IGNORECASE | re.UNICODE)
+
+     def is_category(obj):
+         return bool(re_clean_wikilink.match(str(obj.title)))
+
+     def clean_wikilink(obj):
+         text = obj.__strip__()
+         text = re.sub(re_clean_wikilink, "", text)
+         obj.text = text
+
+     def try_replace_obj(obj):
+         try:
+             clean_wikilink(obj)
+         except ValueError:
+             # For unknown reasons, objects are sometimes not found.
+             pass
+
+     def try_remove_obj(obj, section):
+         try:
+             section.remove(obj)
+         except ValueError:
+             # For unknown reasons, objects are sometimes not found.
+             pass
+
+     section_text = []
+     # Filter individual sections to clean.
+     for section in wikicode.get_sections(flat=True, include_lead=True, include_headings=True):
+         for obj in section.ifilter_wikilinks(recursive=True):
+             if rm_wikilink(obj):
+                 try_remove_obj(obj, section)
+             elif is_category(obj):
+                 try_replace_obj(obj)
+         for obj in section.ifilter_tags(matches=rm_tag, recursive=True):
+             try_remove_obj(obj, section)
+
+         section_text.append(re.sub(re_rm_magic, "", section.strip_code().strip()))
+     return "\n\n".join(section_text)
+
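To make the cleaning behavior concrete, a rough before/after sketch (illustrative input and approximate output, assuming mwparserfromhell's usual `strip_code` behavior):

```python
import mwparserfromhell

raw = "'''Jakarta''' adalah ibu kota.<ref>sumber</ref> [[Kategori:Kota]]"
text = _parse_and_clean_wikicode(raw, parser=mwparserfromhell, language="id")
# Bold markup is stripped, the <ref> tag is dropped, and the category link
# survives as plain text without its prefix -- roughly:
#   "Jakarta adalah ibu kota. Kota"
```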
+
+
+ def _construct_url(title, language):
+     # See: https://meta.wikimedia.org/wiki/Help:URL
+     return f"https://{language}.wikipedia.org/wiki/{quote(title)}"
sea_wiki.py CHANGED
@@ -53,7 +53,7 @@ _COUNTRY_TO_LANG_MAPPER = {
  "mmr": ["my", "shn", "mnw"],
  "mys": ["ms", "ta"],
  #"ceb" lang is available, but not supported yet due to size
- "phl": ["war", "tl", "ilo", "bcl", "pam", "cbk-zam", "pag"],
+ "phl": ["war", "tl", "ilo", "bcl", "pam", "cbk-zam", "pag", "ceb"],
  "sgp": ["ms", "ta"],
  "tha": ["th", "mnw", "shn"],
  "tls": ["tet"],
sea_wiki_dedup_data/wiki_ceb_20231101_dataset_dedup_cleansed.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23c5d20ac09b5777b50062b6113eb6ab3cf51d03f68a506319913ce56b4f338e
+ size 4354355472
sea_wiki_raw_data/wiki_ceb_20231101_raw_dataset.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:860331e94fc7a4540f68dc2d152642b07acdeb040fb6e4a86100ca597f1e6ae2
+ size 4581734087