Jamshidbek Mirzakhalov committed on
Commit
80dd6d1
1 Parent(s): ab251ce

Adding Turkic X-WMT evaluation set for machine translation (#3605)


* adding turkic xwmt evaluation dataset

* fixed isort issues

* addressing pr comments

* addressing pr comments

* Apply suggestions from code review

* Update README.md

Co-authored-by: Jamshidbek Mirzakhalov <mirzakhalov@Jamshidbeks-MacBook-Pro.local>
Co-authored-by: Jamshidbek Mirzakhalov <mirzakhalov@Jamshidbeks-MBP.lan>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/6c89c96617c98d6929f890e776b7da1166f8ba6c

README.md ADDED
@@ -0,0 +1,596 @@
---
annotations_creators:
- crowdsourced
language_creators:
- found
languages:
  az-ba:
  - az
  - ba
  az-en:
  - az
  - en
  az-kaa:
  - az
  - kaa
  az-kk:
  - az
  - kk
  az-ky:
  - az
  - ky
  az-ru:
  - az
  - ru
  az-sah:
  - az
  - sah
  az-tr:
  - az
  - tr
  az-uz:
  - az
  - uz
  ba-az:
  - ba
  - az
  ba-en:
  - ba
  - en
  ba-kaa:
  - ba
  - kaa
  ba-kk:
  - ba
  - kk
  ba-ky:
  - ba
  - ky
  ba-ru:
  - ba
  - ru
  ba-sah:
  - ba
  - sah
  ba-tr:
  - ba
  - tr
  ba-uz:
  - ba
  - uz
  en-az:
  - en
  - az
  en-ba:
  - en
  - ba
  en-kaa:
  - en
  - kaa
  en-kk:
  - en
  - kk
  en-ky:
  - en
  - ky
  en-ru:
  - en
  - ru
  en-sah:
  - en
  - sah
  en-tr:
  - en
  - tr
  en-uz:
  - en
  - uz
  kaa-az:
  - kaa
  - az
  kaa-ba:
  - kaa
  - ba
  kaa-en:
  - kaa
  - en
  kaa-kk:
  - kaa
  - kk
  kaa-ky:
  - kaa
  - ky
  kaa-ru:
  - kaa
  - ru
  kaa-sah:
  - kaa
  - sah
  kaa-tr:
  - kaa
  - tr
  kaa-uz:
  - kaa
  - uz
  kk-az:
  - kk
  - az
  kk-ba:
  - kk
  - ba
  kk-en:
  - kk
  - en
  kk-kaa:
  - kk
  - kaa
  kk-ky:
  - kk
  - ky
  kk-ru:
  - kk
  - ru
  kk-sah:
  - kk
  - sah
  kk-tr:
  - kk
  - tr
  kk-uz:
  - kk
  - uz
  ky-az:
  - ky
  - az
  ky-ba:
  - ky
  - ba
  ky-en:
  - ky
  - en
  ky-kaa:
  - ky
  - kaa
  ky-kk:
  - ky
  - kk
  ky-ru:
  - ky
  - ru
  ky-sah:
  - ky
  - sah
  ky-tr:
  - ky
  - tr
  ky-uz:
  - ky
  - uz
  ru-az:
  - ru
  - az
  ru-ba:
  - ru
  - ba
  ru-en:
  - ru
  - en
  ru-kaa:
  - ru
  - kaa
  ru-kk:
  - ru
  - kk
  ru-ky:
  - ru
  - ky
  ru-sah:
  - ru
  - sah
  ru-tr:
  - ru
  - tr
  ru-uz:
  - ru
  - uz
  sah-az:
  - sah
  - az
  sah-ba:
  - sah
  - ba
  sah-en:
  - sah
  - en
  sah-kaa:
  - sah
  - kaa
  sah-kk:
  - sah
  - kk
  sah-ky:
  - sah
  - ky
  sah-ru:
  - sah
  - ru
  sah-tr:
  - sah
  - tr
  sah-uz:
  - sah
  - uz
  tr-az:
  - tr
  - az
  tr-ba:
  - tr
  - ba
  tr-en:
  - tr
  - en
  tr-kaa:
  - tr
  - kaa
  tr-kk:
  - tr
  - kk
  tr-ky:
  - tr
  - ky
  tr-ru:
  - tr
  - ru
  tr-sah:
  - tr
  - sah
  tr-uz:
  - tr
  - uz
  uz-az:
  - uz
  - az
  uz-ba:
  - uz
  - ba
  uz-en:
  - uz
  - en
  uz-kaa:
  - uz
  - kaa
  uz-kk:
  - uz
  - kk
  uz-ky:
  - uz
  - ky
  uz-ru:
  - uz
  - ru
  uz-sah:
  - uz
  - sah
  uz-tr:
  - uz
  - tr
licenses:
- mit
multilinguality:
- translation
pretty_name: turkic_xwmt
size_categories:
- n<1K
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
source_datasets:
- extended|WMT 2020 News Translation Task
---

# Dataset Card for turkic_xwmt

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [Github](https://github.com/turkic-interlingua/til-mt/tree/master/xwmt)
- **Paper:** [https://arxiv.org/abs/2109.04593](https://arxiv.org/abs/2109.04593)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [turkicinterlingua@gmail.com](mailto:turkicinterlingua@gmail.com)

### Dataset Summary

To establish a comprehensive and challenging evaluation benchmark for machine translation in Turkic languages, we translate a test set originally introduced in the WMT 2020 News Translation Task for English-Russian. The original dataset was professionally translated and consists of sentences from news articles that are both English- and Russian-centric. We adopt this evaluation set (X-WMT) and begin efforts to translate it into several Turkic languages. The current version of X-WMT covers 8 Turkic languages and 88 language directions, with a minimum of 300 sentences per language direction.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Currently covered languages are (besides English and Russian):
- Azerbaijani (az)
- Bashkir (ba)
- Karakalpak (kaa)
- Kazakh (kk)
- Kirghiz (ky)
- Turkish (tr)
- Sakha (sah)
- Uzbek (uz)

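The loading script builds one configuration per ordered pair of covered languages. A minimal sketch of that naming scheme (the language list is taken from the card above; the pair count is the plain ordered cross-product, which also includes the en-ru and ru-en directions):

```python
# Ordered cross-product of the covered languages -> configuration names
# such as "az-ba", "ru-uz", etc. This mirrors how the loader enumerates
# its configs; it is an illustration, not part of the dataset itself.
LANGUAGES = ["az", "ba", "en", "kaa", "kk", "ky", "ru", "sah", "tr", "uz"]

pairs = [f"{src}-{tgt}" for src in LANGUAGES for tgt in LANGUAGES if src != tgt]

print(len(pairs))  # 10 languages -> 90 ordered pairs
print(pairs[:3])   # ['az-ba', 'az-en', 'az-kaa']
```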
## Dataset Structure

### Data Instances

A random example from the Russian-Uzbek set:
```
{"translation": {"ru": "Моника Мутсвангва , министр информации Зимбабве , утверждает , что полиция вмешалась в отъезд Магомбейи из соображений безопасности и вследствие состояния его здоровья .", "uz": "Zimbabvening Axborot vaziri , Monika Mutsvanva Magombeyining xavfsizligi va sog'ligi tufayli bo'lgan jo'nab ketishinida politsiya aralashuvini ushlab turadi ."}}
```

### Data Fields

Each example has one field, "translation", which contains one subfield per language; e.g. for the Russian-Uzbek set:
- **translation**: a dictionary with two subfields:
  - **ru**: the Russian text
  - **uz**: the Uzbek text
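A sketch of loading one direction with the Hugging Face `datasets` library and checking the field layout described above. The `load_dataset` call needs network access, so it is shown commented out; the structure check below runs on a literal example of the documented shape:

```python
# Loading a direction (requires network access, hence commented out):
# from datasets import load_dataset
# ds = load_dataset("turkic_xwmt", "ru-uz", split="test")
# example = ds[0]

# A literal example of the documented shape, for illustration only:
example = {"translation": {"ru": "пример текста", "uz": "matn namunasi"}}

# Every example carries exactly the two language subfields of its config.
src, tgt = "ru-uz".split("-")
assert set(example["translation"]) == {src, tgt}
print(sorted(example["translation"]))  # ['ru', 'uz']
```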

### Data Splits

<details>
<summary>Click here to show the number of examples per configuration:</summary>

|         |   test |
|:--------|-------:|
| az-ba   |    600 |
| az-en   |    600 |
| az-kaa  |    300 |
| az-kk   |    500 |
| az-ky   |    500 |
| az-ru   |    600 |
| az-sah  |    300 |
| az-tr   |    500 |
| az-uz   |    600 |
| ba-az   |    600 |
| ba-en   |   1000 |
| ba-kaa  |    300 |
| ba-kk   |    700 |
| ba-ky   |    500 |
| ba-ru   |   1000 |
| ba-sah  |    300 |
| ba-tr   |    700 |
| ba-uz   |    900 |
| en-az   |    600 |
| en-ba   |   1000 |
| en-kaa  |    300 |
| en-kk   |    700 |
| en-ky   |    500 |
| en-ru   |   1000 |
| en-sah  |    300 |
| en-tr   |    700 |
| en-uz   |    900 |
| kaa-az  |    300 |
| kaa-ba  |    300 |
| kaa-en  |    300 |
| kaa-kk  |    300 |
| kaa-ky  |    300 |
| kaa-ru  |    300 |
| kaa-sah |    300 |
| kaa-tr  |    300 |
| kaa-uz  |    300 |
| kk-az   |    500 |
| kk-ba   |    700 |
| kk-en   |    700 |
| kk-kaa  |    300 |
| kk-ky   |    500 |
| kk-ru   |    700 |
| kk-sah  |    300 |
| kk-tr   |    500 |
| kk-uz   |    700 |
| ky-az   |    500 |
| ky-ba   |    500 |
| ky-en   |    500 |
| ky-kaa  |    300 |
| ky-kk   |    500 |
| ky-ru   |    500 |
| ky-sah  |    300 |
| ky-tr   |    400 |
| ky-uz   |    500 |
| ru-az   |    600 |
| ru-ba   |   1000 |
| ru-en   |   1000 |
| ru-kaa  |    300 |
| ru-kk   |    700 |
| ru-ky   |    500 |
| ru-sah  |    300 |
| ru-tr   |    700 |
| ru-uz   |    900 |
| sah-az  |    300 |
| sah-ba  |    300 |
| sah-en  |    300 |
| sah-kaa |    300 |
| sah-kk  |    300 |
| sah-ky  |    300 |
| sah-ru  |    300 |
| sah-tr  |    300 |
| sah-uz  |    300 |
| tr-az   |    500 |
| tr-ba   |    700 |
| tr-en   |    700 |
| tr-kaa  |    300 |
| tr-kk   |    500 |
| tr-ky   |    400 |
| tr-ru   |    700 |
| tr-sah  |    300 |
| tr-uz   |    600 |
| uz-az   |    600 |
| uz-ba   |    900 |
| uz-en   |    900 |
| uz-kaa  |    300 |
| uz-kk   |    700 |
| uz-ky   |    500 |
| uz-ru   |    900 |
| uz-sah  |    300 |
| uz-tr   |    600 |
</details>

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

**Translators, annotators and dataset contributors** (in alphabetical order)

Abilxayr Zholdybai
Aigiz Kunafin
Akylbek Khamitov
Alperen Cantez
Aydos Muxammadiyarov
Doniyorbek Rafikjonov
Erkinbek Vokhabov
Ipek Baris
Iskander Shakirov
Madina Zokirjonova
Mohiyaxon Uzoqova
Mukhammadbektosh Khaydarov
Nurlan Maharramli
Petr Popov
Rasul Karimov
Sariya Kagarmanova
Ziyodabonu Qobiljon qizi

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[MIT License](https://github.com/turkic-interlingua/til-mt/blob/master/xwmt/LICENSE)

### Citation Information

```
@inproceedings{mirzakhalov2021large,
  title={A Large-Scale Study of Machine Translation in Turkic Languages},
  author={Mirzakhalov, Jamshidbek and Babu, Anoop and Ataman, Duygu and Kariev, Sherzod and Tyers, Francis and Abduraufov, Otabek and Hajili, Mammad and Ivanova, Sardana and Khaytbaev, Abror and Laverghetta Jr, Antonio and others},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
  pages={5876--5890},
  year={2021}
}
```

### Contributions

This project was carried out with help and contributions from dozens of individuals and organizations. We acknowledge and greatly appreciate each and every one of them:

**Authors on the publications** (in alphabetical order)

Abror Khaytbaev
Ahsan Wahab
Aigiz Kunafin
Anoop Babu
Antonio Laverghetta Jr.
Behzodbek Moydinboyev
Dr. Duygu Ataman
Esra Onal
Dr. Francis Tyers
Jamshidbek Mirzakhalov
Dr. John Licato
Dr. Julia Kreutzer
Mammad Hajili
Mokhiyakhon Uzokova
Dr. Orhan Firat
Otabek Abduraufov
Sardana Ivanova
Shaxnoza Pulatova
Sherzod Kariev
Dr. Sriram Chellappan

**Translators, annotators and dataset contributors** (in alphabetical order)

Abilxayr Zholdybai
Aigiz Kunafin
Akylbek Khamitov
Alperen Cantez
Aydos Muxammadiyarov
Doniyorbek Rafikjonov
Erkinbek Vokhabov
Ipek Baris
Iskander Shakirov
Madina Zokirjonova
Mohiyaxon Uzoqova
Mukhammadbektosh Khaydarov
Nurlan Maharramli
Petr Popov
Rasul Karimov
Sariya Kagarmanova
Ziyodabonu Qobiljon qizi

**Industry supporters**

[Google Cloud](https://cloud.google.com/solutions/education)
[Khan Academy Oʻzbek](https://uz.khanacademy.org/)
[The Foundation for the Preservation and Development of the Bashkir Language](https://bsfond.ru/)

Thanks to [@mirzakhalov](https://github.com/mirzakhalov) for adding this dataset.
dataset_infos.json ADDED
The diff for this file is too large to render. See raw diff.

dummy/az-ba/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ee388fc7e097821af7507d978f3b7de3b02ff9e8c65001ec85307bf84ac1c82
size 150606
turkic_xwmt.py ADDED
@@ -0,0 +1,134 @@
# coding=utf-8
# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""A Large-Scale Study of Machine Translation in Turkic Languages."""
import os

import datasets


_LANGUAGES = ["az", "ba", "en", "kaa", "kk", "ky", "ru", "sah", "tr", "uz"]

_DESCRIPTION = """\
A Large-Scale Study of Machine Translation in Turkic Languages
"""

_CITATION = """\
@inproceedings{mirzakhalov2021large,
  title={A Large-Scale Study of Machine Translation in Turkic Languages},
  author={Mirzakhalov, Jamshidbek and Babu, Anoop and Ataman, Duygu and Kariev, Sherzod and Tyers, Francis and Abduraufov, Otabek and Hajili, Mammad and Ivanova, Sardana and Khaytbaev, Abror and Laverghetta Jr, Antonio and others},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
  pages={5876--5890},
  year={2021}
}
"""

_DATA_URL = (
    "https://github.com/turkic-interlingua/til-mt/blob/6ac179350448895a63cc06fcfd1135882c8cc49b/xwmt/test.zip?raw=true"
)


class XWMTConfig(datasets.BuilderConfig):
    """BuilderConfig for XWMT."""

    def __init__(self, language_pair=(None, None), **kwargs):
        """BuilderConfig for XWMT.

        Args:
            language_pair: pair of languages that will be used for translation.
                Should contain two language-code strings.
            **kwargs: keyword arguments forwarded to super.
        """
        name = f"{language_pair[0]}-{language_pair[1]}"
        description = f"Translation dataset from {language_pair[0]} to {language_pair[1]}"
        super(XWMTConfig, self).__init__(
            name=name,
            description=description,
            version=datasets.Version("1.1.0", ""),
            **kwargs,
        )

        # Validate the language pair.
        source, target = language_pair
        assert source in _LANGUAGES, f"Source language must be one of the supported languages, got: {source}"
        assert target in _LANGUAGES, f"Target language must be one of the supported languages, got: {target}"

        self.language_pair = language_pair


class TurkicXWMT(datasets.GeneratorBasedBuilder):
    """XWMT machine translation dataset."""

    BUILDER_CONFIGS = [
        XWMTConfig(
            language_pair=(lang1, lang2),
        )
        for lang1 in _LANGUAGES
        for lang2 in _LANGUAGES
        if lang1 != lang2
    ]

    def _info(self):
        source, target = self.config.language_pair
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {"translation": datasets.features.Translation(languages=self.config.language_pair)}
            ),
            supervised_keys=(source, target),
            homepage="https://github.com/turkic-interlingua/til-mt",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        path = dl_manager.download_and_extract(_DATA_URL)

        source, target = self.config.language_pair
        source_path = os.path.join(path, "test", f"{source}-{target}", f"{source}-{target}.{source}.txt")
        target_path = os.path.join(path, "test", f"{source}-{target}", f"{source}-{target}.{target}.txt")

        files = {
            "test": {
                "source_file": source_path,
                "target_file": target_path,
            }
        }

        return [
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs=files["test"]),
        ]

    def _generate_examples(self, source_file, target_file):
        """Yields the examples in the raw (text) form."""
        with open(source_file, encoding="utf-8") as f:
            source_sentences = f.read().strip().split("\n")
        with open(target_file, encoding="utf-8") as f:
            target_sentences = f.read().strip().split("\n")

        assert len(target_sentences) == len(source_sentences), (
            f"Sizes do not match: {len(source_sentences)} vs {len(target_sentences)} "
            f"for {source_file} vs {target_file}."
        )

        source, target = self.config.language_pair
        for idx, (l1, l2) in enumerate(zip(source_sentences, target_sentences)):
            result = {"translation": {source: l1, target: l2}}
            # Make sure that both translations are non-empty.
            if all(result["translation"].values()):
                yield idx, result
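The core of `_generate_examples` above, zipping two line-aligned text files into `{"translation": {...}}` records, can be sketched standalone. The file names and sentences below are illustrative, not part of the dataset:

```python
import os
import tempfile

def generate_examples(source_file, target_file, source_lang, target_lang):
    # Read the two parallel files and pair lines by index, as the loader does.
    with open(source_file, encoding="utf-8") as f:
        source_sentences = f.read().strip().split("\n")
    with open(target_file, encoding="utf-8") as f:
        target_sentences = f.read().strip().split("\n")
    assert len(source_sentences) == len(target_sentences), "Sizes do not match."
    for idx, (l1, l2) in enumerate(zip(source_sentences, target_sentences)):
        yield idx, {"translation": {source_lang: l1, target_lang: l2}}

# Tiny illustrative pair of parallel files.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "src.txt"), "w", encoding="utf-8") as f:
    f.write("hello\nworld\n")
with open(os.path.join(tmp, "tgt.txt"), "w", encoding="utf-8") as f:
    f.write("salom\ndunyo\n")

examples = list(
    generate_examples(os.path.join(tmp, "src.txt"), os.path.join(tmp, "tgt.txt"), "en", "uz")
)
print(examples[0])  # (0, {'translation': {'en': 'hello', 'uz': 'salom'}})
```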