holylovenia committed on
Commit a7c56c7
1 Parent(s): 729a989

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +54 -19
README.md CHANGED
@@ -1,7 +1,6 @@
 ---
- tags:
- - machine-translation
- language:
 - ind
 - btk
 - bew
@@ -13,22 +12,60 @@ language:
 - mui
 - rej
 - sun
 ---

- # nusatranslation_mt
- Democratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiment results using existing multilingual large language models conclude the need to extend these models to more underrepresented languages.
- We introduce a novel high quality human curated corpora, i.e., NusaMenulis, which covers 12 languages spoken in Indonesia. The resource extend the coverage of languages to 5 new languages, i.e., Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej).
- For the rhetoric mode classification task, we cover 5 rhetoric modes, i.e., narrative, persuasive, argumentative, descriptive, and expository.
  ## Dataset Usage
- Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
  ## Citation
 ```
 @unpublished{anonymous2023nusawrites:,
 title={NusaWrites: Constructing High-Quality Corpora for Underrepresented and Extremely Low-Resource Languages},
@@ -37,16 +74,14 @@ Run `pip install nusacrowd` before loading the dataset through HuggingFace's `lo
 year={2023},
 note={anonymous preprint under review}
 }
- ```
-
- ## License
-
- Creative Commons Attribution Share-Alike 4.0 International
-
- ## Homepage
-
- [https://github.com/IndoNLP/nusatranslation/tree/main/datasets/mt](https://github.com/IndoNLP/nusatranslation/tree/main/datasets/mt)
- ### NusaCatalogue
-
- For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)

+
 ---
+ language:
 - ind
 - btk
 - bew
 - mui
 - rej
 - sun
+ pretty_name: Nusatranslation Mt
+ task_categories:
+ - machine-translation
+ tags:
+ - machine-translation
 ---

+ Democratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiment results using existing multilingual large language models conclude the need to extend these models to more underrepresented languages.
+ We introduce a novel, high-quality, human-curated corpus, NusaMenulis, which covers 12 languages spoken in Indonesia. This resource extends language coverage to 5 new languages: Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej).
+ For the rhetoric mode classification task, we cover 5 rhetoric modes, i.e., narrative, persuasive, argumentative, descriptive, and expository.
+ ## Languages

+ ind, btk, bew, bug, jav, mad, mak, min, mui, rej, sun
+
+ ## Supported Tasks
+
+ Machine Translation
+
 ## Dataset Usage
+ ### Using `datasets` library
+ ```
+ from datasets import load_dataset
+ dset = load_dataset("SEACrowd/nusatranslation_mt", trust_remote_code=True)
+ ```
+ ### Using `seacrowd` library
+ ```
+ import seacrowd as sc
+ # Load the dataset using the default config
+ dset = sc.load_dataset("nusatranslation_mt", schema="seacrowd")
+ # Check all available subsets (config names) of the dataset
+ print(sc.available_config_names("nusatranslation_mt"))
+ # Load the dataset using a specific config
+ dset = sc.load_dataset_by_config_name(config_name="<config_name>")
+ ```
+
+ More details on how to use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).
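
Once loaded, machine-translation fine-tuning typically needs (source, target) sentence pairs. The sketch below assumes rows expose `text_1`/`text_2` fields as in SEACrowd's t2t schema — an assumption, so inspect the loaded `dset` to confirm the actual column names — and uses mock rows rather than the real dataset:

```python
# Minimal sketch: collect (source, target) pairs from rows shaped like
# SEACrowd's t2t schema. The field names `text_1` / `text_2` are an
# assumption -- check the loaded dataset's columns before relying on them.

def to_translation_pairs(rows):
    """Return (source, target) tuples, skipping rows with a missing side."""
    pairs = []
    for row in rows:
        src, tgt = row.get("text_1"), row.get("text_2")
        if src and tgt:
            pairs.append((src, tgt))
    return pairs

# Mock rows standing in for real dataset examples (illustrative only).
rows = [
    {"id": "1", "text_1": "Selamat pagi.", "text_2": "Wilujeng enjing."},
    {"id": "2", "text_1": "Terima kasih.", "text_2": None},
]
print(to_translation_pairs(rows))  # [('Selamat pagi.', 'Wilujeng enjing.')]
```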
+
+ ## Dataset Homepage
+
+ [https://github.com/IndoNLP/nusatranslation/tree/main/datasets/mt](https://github.com/IndoNLP/nusatranslation/tree/main/datasets/mt)
+
+ ## Dataset Version
+
+ Source: 1.0.0. SEACrowd: 2024.06.20.
+
+ ## Dataset License
+
+ Creative Commons Attribution Share-Alike 4.0 International

  ## Citation
+ If you use the **Nusatranslation Mt** dataloader in your work, please cite the following:
  ```
 @unpublished{anonymous2023nusawrites:,
 title={NusaWrites: Constructing High-Quality Corpora for Underrepresented and Extremely Low-Resource Languages},
 year={2023},
 note={anonymous preprint under review}
 }
+
+ @article{lovenia2024seacrowd,
+ title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
+ author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
+ year={2024},
+ eprint={2406.10118},
+ journal={arXiv preprint arXiv: 2406.10118}
+ }

+ ```