Commit 208f134 (parent: 0d3b1bd): Upload README.md with huggingface_hub

README.md

---
language:
- ind
---

# indo_general_mt_en_id

"In the context of Machine Translation (MT) from-and-to English, Bahasa Indonesia has been considered a low-resource language, and therefore applying Neural Machine Translation (NMT), which typically requires a large training dataset, proves to be problematic. In this paper, we show otherwise by collecting large, publicly-available datasets from the Web, which we split into several domains: news, religion, general, and conversation, to train and benchmark some variants of transformer-based NMT models across the domains. We show using BLEU that our models perform well across them, outperform the baseline Statistical Machine Translation (SMT) models, and perform comparably with Google Translate. Our datasets (with the standard split for training, validation, and testing), code, and models are available on https://github.com/gunnxx/indonesian-mt-data."

## Dataset Usage

Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.

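The usage note above can be sketched as a small loader. This is a minimal sketch, not the card's own code: it assumes `pip install nusacrowd` has pulled in the HuggingFace `datasets` package, and the repository id `indo_general_mt_en_id` is an assumption based on this card's title; verify the exact path on the Hub or in the NusaCatalogue.

```python
def load_indo_mt(split: str = "train"):
    """Load one split of the English-Indonesian MT dataset.

    The dataset id below is a hypothetical example based on this
    card's title; check the Hub for the actual repository path.
    Downloads on first call, then reads from the local cache.
    """
    # Deferred import: `datasets` is installed as a dependency of nusacrowd.
    from datasets import load_dataset
    return load_dataset("indo_general_mt_en_id", split=split)
```

For example, `load_indo_mt("train")` would return the training split as a `datasets.Dataset` of sentence pairs.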
## Citation

```
@inproceedings{guntara-etal-2020-benchmarking,
    title = "Benchmarking Multidomain {E}nglish-{I}ndonesian Machine Translation",
    author = "Guntara, Tri Wahyu and
      Aji, Alham Fikri and
      ...
```

## License

Creative Commons Attribution Share-Alike 4.0 International

## Homepage

[https://github.com/gunnxx/indonesian-mt-data](https://github.com/gunnxx/indonesian-mt-data)

### NusaCatalogue

For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)