RicardoRei committed
Commit c299be4
1 Parent(s): 27f0b91

Update README.md

Files changed (1): README.md (+41 -1)

README.md (after the update):

---
license: apache-2.0
language:
- en
- de
- ru
- zh
tags:
- mt-evaluation
- WMT
size_categories:
- 100K<n<1M
---

# Dataset Summary

This dataset contains all MQM human annotations from previous [WMT Metrics shared tasks](https://wmt-metrics-task.github.io/) and the MQM annotations from [Experts, Errors, and Context](https://aclanthology.org/2021.tacl-1.87/).

The data is organised into 9 columns:

- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: MQM score
- system: MT engine that produced the translation
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year

You can also find the original data [here](https://github.com/google/wmt-mqm-human-evaluation). We recommend using the original repo if you are interested in annotation spans rather than just the final score.

## Python usage

```python
from datasets import load_dataset

# The full dataset is distributed as a single "train" split
dataset = load_dataset("RicardoRei/wmt-mqm-human-evaluation", split="train")
```
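
To sanity-check the load, you can print the schema and one row; `features` and integer indexing are standard `datasets.Dataset` accessors:

```python
# Confirm the columns described above and inspect one annotated example
print(dataset.features)
print(dataset[0])
```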

There is no standard train/test split for this dataset, but you can easily split it by year, language pair, or domain, e.g.:

```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)

# split by language pair (lp)
data = dataset.filter(lambda example: example["lp"] == "en-de")

# split by domain
data = dataset.filter(lambda example: example["domain"] == "ted")
```
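
For instance, one simple held-out setup (an illustrative sketch, not an official split) is to train on earlier years and test on the most recent one:

```python
# Illustrative split: earlier years for training, 2022 held out for testing
train = dataset.filter(lambda example: example["year"] < 2022)
test = dataset.filter(lambda example: example["year"] == 2022)
print(len(train), len(test))
```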

## Citation Information

If you use this data, please cite the following works:

- [Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation](https://aclanthology.org/2021.tacl-1.87/)
- [Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain](https://aclanthology.org/2021.wmt-1.73/)
- [Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust](https://aclanthology.org/2022.wmt-1.2/)