sabilmakbar committed on
Commit
7d396d1
1 Parent(s): 15a95e7

Syncing the docs and the codes

README.md CHANGED
@@ -33,9 +33,14 @@ task_ids:
  pretty_name: Wikipedia Archive for Indonesian Languages & Local Languages
  tags:
  - Wikipedia
- - Untagged Languages ISO-639 (Banyumase/Ngapak)
  - Indonesian Language
- - Malaysian Language
  - Indonesia-related Languages
  - Indonesian Local Languages
  dataset_info:
@@ -162,8 +167,27 @@ license: cc-by-sa-3.0
  Welcome to the Indonesian Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.

  # **FAQS**
  ### How do I extract a new Wikipedia dataset of Indonesian languages?
- You may check the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/indonesian_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementation, or you can adjust the bash script provided in [_```extract_raw_wiki_data_indo.sh```_](https://huggingface.co/datasets/sabilmakbar/indonesian_wiki/blob/main/extract_raw_wiki_data_indo.sh) to extract it on your own.

  ### How do I find the latest dumps or other languages to extract?
  You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check the latest available data and this [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) list to map it to any language you want to extract.
@@ -171,7 +195,7 @@ You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-ind
  ### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
  The data available here is processed with the following flow:
  1. The raw data is deduplicated on ```title``` and ```text``` (the text content of a given article) to remove articles containing boilerplate text (template text typically used when no information is available or to ask for content contributions), which is usually considered noisy for NLP data.
- 2. Furthermore, the ```title``` and ```text``` data are checked for string-matching duplication (duplication of text after pre-processing, i.e. symbols removed, HTML tags stripped, or ASCII chars validated). You may check the [```cleanse_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/indonesian_wiki/blob/main/cleanse_wiki_data.py) script to understand its implementation.

  # Getting Started #
  ### To read the datasets directly ###
@@ -179,14 +203,14 @@ Use one of the following code chunks to load it from HuggingFace Hub:
  You can refer to the 2nd argument, the ```config name```, using the following script:
  ```
  dataset = load_dataset(
- "sabilmakbar/indonesian_wiki",
  "indo_wiki_dedup_data" # a config name, can be "indo_wiki_raw_data" or "indowiki_dedup_id_only", defaults to "indo_wiki_dedup_data"
  )
  ```
  Or you can provide both ```lang``` and ```date_stamp``` (providing only one will throw an error):
  ```
  dataset = load_dataset(
- "sabilmakbar/indonesian_wiki",
  lang = "id", # see the splits for complete lang choices
  date_stamp="20230901"
  )
 
  pretty_name: Wikipedia Archive for Indonesian Languages & Local Languages
  tags:
  - Wikipedia
+ - Indonesian
+ - Sundanese
+ - Javanese
+ - Malay
+ - Dialect
+ - Javanese Dialect (Banyumase/Ngapak)
  - Indonesian Language
+ - Malay Language
  - Indonesia-related Languages
  - Indonesian Local Languages
  dataset_info:
 
  Welcome to the Indonesian Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.

  # **FAQS**
+ ### What languages are available in this dataset?
+ Please check the following table.
+ |Lang Code|Lang Desc|Wiki Info|Total Data|Total Size (bytes)|
+ |:---|:----:|:---:|---:|---:|
+ |ace|Acehnese|[Wiki Link](https://en.wikipedia.org/wiki/Acehnese_language)|12904|4867838|
+ |ban|Balinese|[Wiki Link](https://en.wikipedia.org/wiki/Balinese_language)|19837|17366080|
+ |bjn|Banjarese|[Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language)|10437|6655378|
+ |bug|Buginese|[Wiki Link](https://en.wikipedia.org/wiki/Buginese_language)|9793|2072609|
+ |gor|Gorontalo|[Wiki Link](https://en.wikipedia.org/wiki/Gorontalo_language)|14514|5989252|
+ |id|Indonesian|[Wiki Link](https://en.wikipedia.org/wiki/Indonesian_language)|654287|1100932403|
+ |jv|Javanese|[Wiki Link](https://en.wikipedia.org/wiki/Javanese_language)|72667|69774853|
+ |map_bms|Banyumasan (Javanese dialect)|[Wiki Link](https://en.wikipedia.org/wiki/Banyumasan_dialect)|11832|5060989|
+ |min|Minangkabau|[Wiki Link](https://en.wikipedia.org/wiki/Minangkabau_language)|225858|116376870|
+ |ms|Malay|[Wiki Link](https://en.wikipedia.org/wiki/Malay_language)|346186|410443550|
+ |nia|Nias|[Wiki Link](https://en.wikipedia.org/wiki/Nias_language)|1650|1938121|
+ |su|Sundanese|[Wiki Link](https://en.wikipedia.org/wiki/Sundanese_language)|61494|47410439|
+ |tet|Tetum|[Wiki Link](https://en.wikipedia.org/wiki/Tetum_language)|1465|1452716|
+
+
  ### How do I extract a new Wikipedia dataset of Indonesian languages?
+ You may check the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/indo_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementation, or you can adjust the bash script provided in [_```extract_raw_wiki_data_indo.sh```_](https://huggingface.co/datasets/sabilmakbar/indo_wiki/blob/main/extract_raw_wiki_data_indo.sh) to extract it on your own. Please note that this process is extensible to any language of your choice.
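As a rough illustration only (the actual logic lives in ```extract_raw_wiki_data.py```), pulling a single raw dump via the upstream Hugging Face ```wikipedia``` loader could look something like the sketch below. The output path and filename convention are hypothetical, and non-preprocessed dumps need Apache Beam installed for local parsing:
```
# Sketch only -- not the repository's exact implementation.
from datasets import load_dataset

lang = "ace"           # any language code from the table above
date_ver = "20230901"  # dump date, mirroring extract_raw_wiki_data_indo.sh

# The upstream "wikipedia" dataset accepts a language code and dump date;
# dumps without a preprocessed config are parsed locally via DirectRunner.
raw = load_dataset(
    "wikipedia", language=lang, date=date_ver,
    beam_runner="DirectRunner", split="train",
)

# Persist the raw extraction to CSV (hypothetical filename convention).
raw.to_csv(f"./indo_wiki_raw/wiki_{lang}_{date_ver}_raw_dataset.csv", index=False)
```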
 
  ### How do I find the latest dumps or other languages to extract?
  You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check the latest available data and this [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) list to map it to any language you want to extract.
 
  ### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
  The data available here is processed with the following flow (a rough sketch of both steps follows the list):
  1. The raw data is deduplicated on ```title``` and ```text``` (the text content of a given article) to remove articles containing boilerplate text (template text typically used when no information is available or to ask for content contributions), which is usually considered noisy for NLP data.
+ 2. Furthermore, the ```title``` and ```text``` data are checked for string-matching duplication (duplication of text after pre-processing, i.e. symbols removed, HTML tags stripped, or ASCII chars validated). You may check the [```cleanse_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/indo_wiki/blob/main/cleanse_wiki_data.py) script to understand its implementation.
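A rough pandas-based sketch of both deduplication steps, assuming hypothetical file/column names and a simplified normalization (the authoritative version is ```cleanse_wiki_data.py```):
```
# Sketch of the hard + soft dedup flow above; not the repo's exact code.
import re
import pandas as pd

df = pd.read_csv("wiki_id_raw_dataset.csv")  # hypothetical raw extraction with "title"/"text"

# Step 1: hard dedup -- drop exact duplicates of title and text.
df = df.drop_duplicates(subset=["title", "text"])

def normalize(s: str) -> str:
    # Strip HTML tags, drop symbols/non-alphanumeric chars, collapse whitespace.
    s = re.sub(r"<[^>]+>", " ", str(s))
    s = re.sub(r"[^A-Za-z0-9\s]", " ", s)
    return re.sub(r"\s+", " ", s).strip().lower()

# Step 2: soft dedup -- drop rows whose normalized title/text still collide.
normalized = pd.DataFrame({
    "title": df["title"].map(normalize),
    "text": df["text"].map(normalize),
})
df = df[~normalized.duplicated(subset=["title", "text"])]

df.to_csv("wiki_id_dedup_dataset.csv", index=False)
```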
 
  # Getting Started #
  ### To read the datasets directly ###
 
  You can refer to the 2nd argument, the ```config name```, using the following script:
  ```
  dataset = load_dataset(
+ "sabilmakbar/indo_wiki",
  "indo_wiki_dedup_data" # a config name, can be "indo_wiki_raw_data" or "indowiki_dedup_id_only", defaults to "indo_wiki_dedup_data"
  )
  ```
  Or you can provide both ```lang``` and ```date_stamp``` (providing only one will throw an error):
  ```
  dataset = load_dataset(
+ "sabilmakbar/indo_wiki",
  lang = "id", # see the splits for complete lang choices
  date_stamp="20230901"
  )
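# A minimal, self-contained sketch of the snippet above: the import is the
# only addition, and trust_remote_code is an assumption (newer `datasets`
# releases require it for datasets that ship a loading script like this one).
from datasets import load_dataset

dataset = load_dataset(
    "sabilmakbar/indo_wiki",
    "indo_wiki_dedup_data",  # or "indo_wiki_raw_data" / "indowiki_dedup_id_only"
    trust_remote_code=True,
)
print(dataset)  # inspect the available splits (one per language and dump date)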
dedup_raw_wiki_data_indo.sh CHANGED
@@ -4,8 +4,8 @@
  # "ace", "ban", "bjn", "bug", "gor", "id", "jv", "map-bms", "min", "ms", "nia", "su", "tet"

  #params of executions
- folder_dir_to_save=./wiki_indonesian_dedup
- input_folder_to_be_dedup=./wiki_indonesian_raw

  drop_hard_dupl=True
  drop_soft_dupl=True
 
  # "ace", "ban", "bjn", "bug", "gor", "id", "jv", "map-bms", "min", "ms", "nia", "su", "tet"

  #params of executions
+ folder_dir_to_save=./indo_wiki_dedup
+ input_folder_to_be_dedup=./indo_wiki_raw

  drop_hard_dupl=True
  drop_soft_dupl=True
extract_raw_wiki_data_indo.sh CHANGED
@@ -5,7 +5,7 @@

  #params of executions
  date_ver=20230901
- folder_dir_to_save=./wiki_indonesian_raw
  lang_list=(ace ban bjn)
  # bug gor id jv map-bms min ms nia su tet)
 
  #params of executions
  date_ver=20230901
+ folder_dir_to_save=./indo_wiki_raw
  lang_list=(ace ban bjn)
  # bug gor id jv map-bms min ms nia su tet)
 
indo_wiki.py CHANGED
@@ -18,7 +18,7 @@ _CITATIONS = """\
  title = "Huggingface Wikipedia Dataset",
  url = "https://huggingface.co/datasets/wikipedia"}"""

- _REPO_URL = "https://huggingface.co/datasets/sabilmakbar/indonesian_wiki"

  _LICENSE = (
  "This work is licensed under the Creative Commons Attribution-ShareAlike "
 
  title = "Huggingface Wikipedia Dataset",
  url = "https://huggingface.co/datasets/wikipedia"}"""

+ _REPO_URL = "https://huggingface.co/datasets/sabilmakbar/indo_wiki"

  _LICENSE = (
  "This work is licensed under the Creative Commons Attribution-ShareAlike "