Datasets:
QE4PE Pretask
The goal of the pretask is to familiarize translators with the GroTE interface, identify potential issues, and inform the modality assignment for the main task, ensuring a uniform distribution of editing times across modalities. Refer to the translators' guidelines for additional details about the task.
Folder Structure
pretask/
├── inputs/
│   ├── eng-ita/
│   │   ├── pretask_eng-ita_doc1_input.txt
│   │   ├── pretask_eng-ita_doc2_input.txt
│   │   └── ...                              # GroTE input files with tags and ||| source-target separator
│   └── eng-nld/
│       ├── pretask_eng-nld_doc1_input.txt
│       ├── pretask_eng-nld_doc2_input.txt
│       └── ...                              # GroTE input files with tags and ||| source-target separator
├── outputs/
│   ├── eng-ita/
│   │   ├── logs/
│   │   │   ├── pretask_eng-ita_t1_logs.csv
│   │   │   └── ...                          # GroTE logs for every translator (e.g. t1)
│   │   ├── metrics/
│   │   │   ├── pretask_eng-ita_t1_metrics.csv
│   │   │   └── ...                          # Metrics for every translator (e.g. t1)
│   │   ├── pretask_eng-ita_doc1_t1_output.txt
│   │   └── ...                              # GroTE output files (one post-edited segment per line)
│   └── eng-nld/
│       ├── logs/
│       │   ├── pretask_eng-nld_t1_logs.csv
│       │   └── ...                          # GroTE logs for every translator (e.g. t1)
│       ├── metrics/
│       │   ├── pretask_eng-nld_t1_metrics.csv
│       │   └── ...                          # Metrics for every translator (e.g. t1)
│       ├── pretask_eng-nld_doc1_t1_output.txt
│       └── ...                              # GroTE output files (one post-edited segment per line)
├── doc_id_map.json                          # Source and doc name maps
├── main_task_assignments.json               # Translator assignments to main task modalities
├── pretask_eng-ita_guidelines.pdf           # Task guidelines for translators
├── pretask_eng-nld_guidelines.pdf           # Task guidelines for translators
└── README.md
Inputs
The pretask uses six documents of 6-10 contiguous segments each, drawn from the same wmt23 collection as the main task: these documents matched all main-task requirements but were not selected for the main collection. doc_id_map.json maps each document to its source in the original collection.
We use the supervised highlights produced by XCOMET-XXL to familiarize translators with editing highlighted texts; the highlights are assumed to be of good but not "perfect" quality (as opposed to the oracle modality). Word-level error spans are extended to the closest word boundaries whenever necessary. Input files for the task are named using the format:
"{{TASK_ID}}_{{TRANSLATION_DIRECTION}}_{{DOC_ID}}_input.txt"
Main Task Assignments
From the pretask results, we create three groups per translation direction, representing faster (1), average (2), and slower (3) translators, by splitting the time-ranked list of translators into three blocks of equal size. Within each block, we randomly assign a modality to each translator (random seed = 42). All gaps >5 minutes in the logs are omitted from the time calculation to account for accidental AFK time during logging.
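A simplified sketch of this procedure (not the original script: per-translator event timestamps are assumed to be extracted from the GroTE logs, and the shuffling details may differ from the actual implementation):

```python
import random
from datetime import timedelta

MODALITIES = ["no_highlight", "oracle", "supervised", "unsupervised"]

def editing_time(timestamps, max_gap=timedelta(minutes=5)):
    """Sum the gaps between consecutive log events, skipping pauses
    longer than max_gap (assumed accidental AFK time). `timestamps`
    is a sorted list of datetimes for one translator."""
    total = timedelta()
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev <= max_gap:
            total += curr - prev
    return total

def assign_modalities(time_by_translator, seed=42):
    """Split the time-ranked translators into three equal blocks and
    randomly assign the four modalities within each block."""
    rng = random.Random(seed)
    ranked = sorted(time_by_translator, key=time_by_translator.get)
    block_size = len(ranked) // 3
    assignments = {}
    for b in range(3):
        block = ranked[b * block_size:(b + 1) * block_size]
        modalities = MODALITIES.copy()
        rng.shuffle(modalities)
        assignments.update(zip(block, modalities))
    return assignments
```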
The final assignments (also available in machine-readable form in main_task_assignments.json) are:
English - Italian
| Name | Time | Modality | Alias |
|------|------|----------|-------|
| Group 1 | | | |
| t8 | 37m | supervised | supervised_t1 |
| t11 | 45m | oracle | oracle_t1 |
| t10 | 51m | no_highlight | no_highlight_t1 |
| t7 | 62m | unsupervised | unsupervised_t1 |
| Group 2 | | | |
| t1 | 68m | oracle | oracle_t2 |
| t5 | 70m | unsupervised | unsupervised_t2 |
| t6 | 74m | supervised | supervised_t2 |
| t12 | 100m | no_highlight | no_highlight_t2 |
| Group 3 | | | |
| t9 | 106m | no_highlight | no_highlight_t3 |
| t3 | 122m | oracle | oracle_t3 |
| t2 | 164m | unsupervised | unsupervised_t3 |
| t4 | 185m | supervised | supervised_t3 |
English - Dutch
| Name | Time | Modality | Alias |
|------|------|----------|-------|
| Group 1 | | | |
| t4 | 23m | oracle | oracle_t1 |
| t12 | 27m | supervised | supervised_t1 |
| t1 | 31m | no_highlight | no_highlight_t1 |
| t7 | 36m | unsupervised | unsupervised_t1 |
| Group 2 | | | |
| t8 | 44m | supervised | supervised_t2 |
| t2 | 44m | unsupervised | unsupervised_t2 |
| t11 | 65m | oracle | oracle_t2 |
| t5 | 66m | no_highlight | no_highlight_t2 |
| Group 3 | | | |
| t9 | 68m | no_highlight | no_highlight_t3 |
| t6 | 76m | supervised | supervised_t3 |
| t3 | 103m | unsupervised | unsupervised_t3 |
| t10 | 152m | oracle | oracle_t3 |
t13 was added for English - Dutch as a replacement for t9, who compromised part of the task by not following the guidelines. t13 was assigned to the no_highlight modality with no_highlight_t4 as its alias. The pretask editing time for t13 was 19 minutes.
Outputs
Files in outputs/eng-ita and outputs/eng-nld contain post-edited outputs (one segment per line, matching the inputs), named using the format:
"{{TASK_ID}}_{{TRANSLATION_DIRECTION}}_{{DOC_ID}}_{{TRANSLATOR_ID}}_output.txt"
The contents of outputs/{{TRANSLATION_DIRECTION}}/logs can be used to analyze editing behavior at a granular level. Each log file is named using the format:
"{{TASK_ID}}_{{TRANSLATION_DIRECTION}}_{{TRANSLATOR_ID}}_logs.csv"
Refer to GroTE documentation for more information about the logs.
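For example, all log files for one direction can be loaded into pandas for inspection (a sketch; the column schema is defined by GroTE and should be checked against its documentation before analysis):

```python
from pathlib import Path
import pandas as pd

logs_dir = Path("pretask/outputs/eng-ita/logs")
logs = {
    # "pretask_eng-ita_t1_logs" -> translator id "t1"
    path.stem.split("_")[2]: pd.read_csv(path)
    for path in sorted(logs_dir.glob("*_logs.csv"))
}
print(logs["t1"].columns.tolist())
```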
The files in outputs/{{TRANSLATION_DIRECTION}}/metrics contain aggregated metrics for each translator, named using the format:
"{{TASK_ID}}_{{TRANSLATION_DIRECTION}}_{{TRANSLATOR_ID}}_metrics.csv"
Metrics include:
- max/min/mean/std of BLEU, chrF, TER, and COMET scores, computed between the MT (or PE) text and all (other, in the PE case) post-edited variants of the same segment for {{TRANSLATION_DIRECTION}}.
- XCOMET-XXL segment-level QE scores and errors for the MT and PE texts.
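As an illustration, the chrF component of the first metric could be recomputed with sacrebleu as follows (a sketch based on the description above, not the script that produced the files; the aggregation details, e.g. sample vs. population std, are assumptions):

```python
from statistics import mean, pstdev
import sacrebleu

def pe_vs_other_pes_chrf(variants):
    """Score each post-edited variant of a segment against all *other*
    variants of that segment, then aggregate (chrF only, shown here)."""
    scores = []
    for i, hypothesis in enumerate(variants):
        references = [v for j, v in enumerate(variants) if j != i]
        scores.append(sacrebleu.sentence_chrf(hypothesis, references).score)
    return {"max": max(scores), "min": min(scores),
            "mean": mean(scores), "std": pstdev(scores)}

variants = ["Dit is een voorbeeld.", "Dit is 'n voorbeeld.", "Dit is een voorbeeldzin."]
print(pe_vs_other_pes_chrf(variants))
```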