{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:51.147646Z"
},
"title": "Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task",
"authors": [
{
"first": "Sweta",
"middle": [],
"last": "Agrawal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": "marine@cs.umd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the University of Maryland's submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Unlike the standard machine translation task, STAPLE requires generating a set of outputs for a given input sequence, aiming to cover the space of translations produced by language learners. We adapt neural machine translation models to this requirement by (a) generating n-best translation hypotheses from a model fine-tuned on learner translations, oversampled to reflect the distribution of learner responses, and (b) filtering hypotheses using a feature-rich binary classifier that directly optimizes a close approximation of the official evaluation metric. Combination of systems that use these two strategies achieves F1 scores of 53.9% and 52.5% on Vietnamese and Portuguese, respectively ranking 2 nd and 4 th on the leaderboard.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the University of Maryland's submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Unlike the standard machine translation task, STAPLE requires generating a set of outputs for a given input sequence, aiming to cover the space of translations produced by language learners. We adapt neural machine translation models to this requirement by (a) generating n-best translation hypotheses from a model fine-tuned on learner translations, oversampled to reflect the distribution of learner responses, and (b) filtering hypotheses using a feature-rich binary classifier that directly optimizes a close approximation of the official evaluation metric. Combination of systems that use these two strategies achieves F1 scores of 53.9% and 52.5% on Vietnamese and Portuguese, respectively ranking 2 nd and 4 th on the leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "While machine translation (MT) typically produces a single output for each input, scoring and generation for second language learning applications might benefit from systems whose outputs better capture the diversity of translations produced by language learners. The Duolingo Simultaneous Translation And Paraphrase for Language Education (STAPLE) shared task (Mayhew et al., 2020) provides a framework for developing and testing such systems, grounded in real translations produced by English learners into five native languages (Portuguese, Vietnamese, Hungarian, Japanese, Korean). In this task, given an English sentence prompt, systems are asked to produce a set of translations for that prompt, and are scored based on how well their outputs cover human-curated acceptable translations, weighted by the likelihood that an English learner would respond with each translation (Table 1) .",
"cite_spans": [
{
"start": 361,
"end": 382,
"text": "(Mayhew et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 881,
"end": 890,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Output minha explica\u00e7\u00e3o est\u00e1 clara? 0.267 minha explica\u00e7\u00e3o \u00e9 clara? 0.161 a minha explica\u00e7\u00e3o est\u00e1 clara? 0.111 a minha explica\u00e7\u00e3o \u00e9 clara? 0.088 minha explana\u00e7\u00e3o est\u00e1 clara? 0.057 est\u00e1 clara minha explica\u00e7\u00e3o? 0.044 minha explana\u00e7\u00e3o \u00e9 clara? 0.039 While the multiple translations can be viewed as paraphrases, we propose to address the STAPLE task primarily as a MT task to better understand the strengths and weaknesses of neural MT architectures for generating multiple learner-relevant translations. Given a Transformer model for the language pair of interest, we use beam search to generate n-best translation candidates. However, since n-best lists are known to lack diversity, we propose to generate hypotheses that better match the requirements of the STAPLE task via:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prompt is my explanation clear?",
"sec_num": null
},
{
"text": "1. Frequency-Aware n-Best Lists: We encourage hypotheses to reflect the diversity and frequency of learner responses by fine-tuning models on STAPLE data, oversampling translation options to reflect learner preferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prompt is my explanation clear?",
"sec_num": null
},
{
"text": "We filter the resulting n-best lists using a binary classifier which identifies good translations that are likely to be produced by a learner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering:",
"sec_num": "2."
},
{
"text": "Controlled experiments and analysis show the benefits of both strategies. Our final submission which includes both techniques achieves F1 scores of 53.9% and 52.5% for en-vi and en-pt respec-tively, reaching a rank of 2 nd and 4 th on the leaderboard, only 2 points below the top scoring system. For completeness, we also submitted systems for the remaining language pairs using Frequency-Aware n-best lists: our system ranked 2 nd for Japanese and 3 rd for Korean and Hungarian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering:",
"sec_num": "2."
},
{
"text": "Unlike in the STAPLE task, recent attempts at generating multiple translations for a single source have targeted output variability along specific stylistic dimensions (Sennrich et al., 2016b; Rabinovich et al., 2016; Agrawal and Carpuat, 2019) or produce diverse outputs without a specific use case (Kikuchi et al., 2016; Shu et al., 2019) . The techniques used can be divided in three categories: (a) constrain the decoding process to generate diverse candidates (Li and Jurafsky, 2016; Li et al., 2015; Cho, 2016) ; (b) optimize via a diversity promoting loss function (Li et al., 2015) ; (c) expose the model to different translation candidates with side-constraints (Rabinovich et al., 2016; Sennrich et al., 2016a; Agrawal and Carpuat, 2019; Shu et al., 2019) or without (Shen et al., 2019) . Since it is unclear what dimensions of variations are captured in the STAPLE translation, we focus instead on improving n-best lists generated by a standard neural MT model. Source texts with multiple references have mostly been used to evaluate rather than train MT systems (Papineni et al., 2002; Banerjee and Lavie, 2005; Qin and Specia, 2015) . Evaluation sets with 4 or 5 references have been converted to singlereference training samples (Zheng et al., 2018) to improve MT training, but reference translations vary in arbitrary ways and often exhibit poor diversity, mostly limited to translationese effects. The STAPLE data presents an opportunity to explore multiple translations generated in a more comprehensive fashion.",
"cite_spans": [
{
"start": 168,
"end": 192,
"text": "(Sennrich et al., 2016b;",
"ref_id": "BIBREF18"
},
{
"start": 193,
"end": 217,
"text": "Rabinovich et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 218,
"end": 244,
"text": "Agrawal and Carpuat, 2019)",
"ref_id": "BIBREF0"
},
{
"start": 300,
"end": 322,
"text": "(Kikuchi et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 323,
"end": 340,
"text": "Shu et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 465,
"end": 488,
"text": "(Li and Jurafsky, 2016;",
"ref_id": "BIBREF9"
},
{
"start": 489,
"end": 505,
"text": "Li et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 506,
"end": 516,
"text": "Cho, 2016)",
"ref_id": "BIBREF3"
},
{
"start": 572,
"end": 589,
"text": "(Li et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 671,
"end": 696,
"text": "(Rabinovich et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 697,
"end": 720,
"text": "Sennrich et al., 2016a;",
"ref_id": "BIBREF17"
},
{
"start": 721,
"end": 747,
"text": "Agrawal and Carpuat, 2019;",
"ref_id": "BIBREF0"
},
{
"start": 748,
"end": 765,
"text": "Shu et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 777,
"end": 796,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 1074,
"end": 1097,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF14"
},
{
"start": 1098,
"end": 1123,
"text": "Banerjee and Lavie, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 1124,
"end": 1145,
"text": "Qin and Specia, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 1243,
"end": 1263,
"text": "(Zheng et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "While neural MT systems can generate multiple translation candidates per source using beam search, the n-best translations often lack diversity. One issue is that systems are trained on singletranslation training samples. We propose to tailor MT to the STAPLE task by fine-tuning models on LRF-weighted multi-reference samples to obtain more diverse translations and a ranking that better reflect learner preferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Aware Hypotheses Generation",
"sec_num": "3.1"
},
{
"text": "Given the STAPLE data for a language pair, where the i-th training example, (e i , F i , W i ) includes a source sentence in English, a reference set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Aware Hypotheses Generation",
"sec_num": "3.1"
},
{
"text": "F i = {f 1 i , f 2 i , ..., f K i } of K translations and corre- sponding LRF weights W i = {w 1 i , w 2 i , ..., w K i }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Aware Hypotheses Generation",
"sec_num": "3.1"
},
{
"text": ", we create MT training samples by copying the translation pair (e i , f j i ), w j i \u00d7 O times. 1 Given model parameters \u03b8, this yields a weighted crossentropy loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Aware Hypotheses Generation",
"sec_num": "3.1"
},
{
"text": "L lrf (\u03b8) = M i=1 K j=1 (w j i \u00d7 O) log P r(f j i |e i ; \u03b8) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Aware Hypotheses Generation",
"sec_num": "3.1"
},
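{
"text": "A minimal sketch of this oversampling scheme in Python, assuming the STAPLE data is available as (prompt, [(translation, lrf_weight), ...]) tuples; the function name make_weighted_samples and the max(1, ...) floor for very small weights are our own illustrative choices, not the official implementation:\n\nO = 1000  # oversampling constant (see footnote 1)\n\ndef make_weighted_samples(staple_data, oversample=O):\n    # staple_data: list of (english_prompt, [(translation, lrf_weight), ...]).\n    # Each pair (e, f) is copied roughly w * O times, so frequent learner\n    # translations dominate the fine-tuning distribution (Eq. 1).\n    samples = []\n    for prompt, references in staple_data:\n        for translation, weight in references:\n            copies = max(1, round(weight * oversample))\n            samples.extend([(prompt, translation)] * copies)\n    return samples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Aware Hypotheses Generation",
"sec_num": "3.1"
},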
{
"text": "Even when informed by STAPLE data and LRF scores, n-best lists might include translations that are not in the reference set, due to translation errors or selecting paraphrases that do not match language learners' preferences. We design a binary classifier that further filters the n-best lists by predicting for each hypothesis whether or not it should be included in the final set. This lets us define features based on the complete prompt and hypothesis sequence pairs, while the MT model generates the hypothesis incrementally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
{
"text": "Let D = {(e i ,f 1 i ,f 2 i , ...,f N i )} M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
{
"text": "1 represent the n-best list generated via beam search for all the source prompts in the training dataset: e i corresponds to the i-th source prompt andf j i corresponds to the j-th candidate hypothesis extracted via beam search. x i j represents the feature vector extracted from the source (e i ) and j-th candidate hypothesis (f j i ) and y j i is a binary label indicative of whether the candidate hypothesis,f j i , is found in the gold standard data. The classification model f : X \u2192 R maps the feature vector to a real value, where, f is a two-layer Neural Network (NN) to enable learning feature combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
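{
"text": "A sketch of this classifier in PyTorch, using the two-layer feed-forward architecture with 5 hidden units and 2 output units reported in \u00a7 4.3; the class name, the ReLU activation, and the module layout are our assumptions, as the paper does not specify them:\n\nimport torch.nn as nn\n\nclass HypothesisFilter(nn.Module):\n    # Maps a sentence-level feature vector x to keep/discard logits,\n    # realizing f: X -> R over learned feature combinations (Section 3.2).\n    def __init__(self, num_features, hidden=5, num_outputs=2):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Linear(num_features, hidden),\n            nn.ReLU(),\n            nn.Linear(hidden, num_outputs),\n        )\n\n    def forward(self, x):\n        return self.net(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},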
{
"text": "Features We aim to capture the quality of a source-hypothesis pair using multiple sentencelevel features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
{
"text": "\u2022 Length features |f |, |e|, |f | |e| , |e| |f |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
{
"text": "might indicate mismatches between source and target content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
{
"text": "\u2022 Word alignment features have proved useful to identify semantic divergences in bitext (Munteanu and Marcu, 2005; Vyas et al., 2018) . We use the Forward and Reverse Alignment score, the count of unaligned words for source and target, and the top three largest fertilities for source and target.",
"cite_spans": [
{
"start": 88,
"end": 114,
"text": "(Munteanu and Marcu, 2005;",
"ref_id": "BIBREF11"
},
{
"start": 115,
"end": 133,
"text": "Vyas et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
{
"text": "\u2022 Scores from various MT models as often done when reranking n-best lists (Cherry and Foster, 2012; Neubig et al., 2015; Hassan et al., 2018) including a left-to-right model, a right-to-left model, and a target-to-source model, which provide different views of the example and might better estimate the adequacy of the translation than the original MT model score.",
"cite_spans": [
{
"start": 74,
"end": 99,
"text": "(Cherry and Foster, 2012;",
"ref_id": "BIBREF2"
},
{
"start": 100,
"end": 120,
"text": "Neubig et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 121,
"end": 141,
"text": "Hassan et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
{
"text": "\u2022 Target 5-gram language model score to estimate the fluency of the hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},
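{
"text": "A hedged sketch of how these features could be assembled into a single vector per source-hypothesis pair; the signature and feature ordering are illustrative, and the alignment and MT-score inputs are assumed to be precomputed by the external tools described in \u00a7 4:\n\ndef feature_vector(src_tokens, hyp_tokens, align_feats, mt_scores, lm_score):\n    # src_tokens / hyp_tokens: tokenized prompt e and hypothesis f.\n    # align_feats: forward/reverse alignment scores, unaligned-word counts,\n    # and top-3 fertilities for source and target.\n    # mt_scores: left-to-right, right-to-left and target-to-source scores.\n    # lm_score: target 5-gram language model score.\n    src_len, hyp_len = len(src_tokens), len(hyp_tokens)\n    length_feats = [hyp_len, src_len, hyp_len / src_len, src_len / hyp_len]\n    return length_feats + list(align_feats) + list(mt_scores) + [lm_score]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypothesis Filtering as Binary Classification",
"sec_num": "3.2"
},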
{
"text": "We optimize a Soft Macro-F1 objective (Hsieh et al., 2018) function to approximate the official evaluation metric. 2 The true positive (tp), false positive(fp), and true negative (tn) rate for each source prompt e i are estimated as:",
"cite_spans": [
{
"start": 38,
"end": 58,
"text": "(Hsieh et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 115,
"end": 116,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
{
"text": "tp e i = N t=1\u0177 i \u00d7 y i fp e i = N t=1\u0177 i \u00d7 (1 \u2212 y i ) tn e i = N t=1 (1 \u2212\u0177 i ) \u00d7 y i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
{
"text": "Then, the precision, recall, F1 for a source e i , and the loss are defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
{
"text": "P e i = tp e i tp e i + fp e i + R e i = tp e i tp e i + fn e i + F1 Macroe i = 2 \u00d7 P e i \u00d7 R e i P e i + R e i + Loss = M i=1 (1 \u2212 F1 Macroe i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
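{
"text": "A minimal PyTorch sketch of this soft macro-F1 loss, assuming the classifier outputs a keep probability per hypothesis and that hypotheses are grouped by source prompt; the epsilon smoothing is our own addition to avoid division by zero:\n\nimport torch\n\ndef soft_macro_f1_loss(y_prob, y_true, prompt_ids, eps=1e-8):\n    # y_prob: predicted keep probabilities, shape (N,).\n    # y_true: gold 0/1 labels, shape (N,).\n    # prompt_ids: source prompt index for each hypothesis, shape (N,).\n    loss = 0.0\n    for pid in prompt_ids.unique():\n        mask = prompt_ids == pid\n        p, t = y_prob[mask], y_true[mask]\n        tp = (p * t).sum()        # soft true positives\n        fp = (p * (1 - t)).sum()  # soft false positives\n        fn = ((1 - p) * t).sum()  # soft false negatives\n        precision = tp / (tp + fp + eps)\n        recall = tp / (tp + fn + eps)\n        f1 = 2 * precision * recall / (precision + recall + eps)\n        loss = loss + (1 - f1)\n    return loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},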
{
"text": "4 Experiment Settings 4.1 Data STAPLE Data The shared task provides English source prompts, associated with high-coverage sets Figure 1 : Average of the top-1, top-5, mean and median LRF values across source prompts: the LRF distribution is more uniform for languages with many more references per prompt (e.g. en-ja).",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 135,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
{
"text": "of plausible translations in five other languages. These translations are weighted and ranked according to LRF scores indicating which translations are more likely. About 3000 prompts per language are available (see Table 2 for details) and the number of reference translations available per prompt vary across languages (mean: 174.2, variance: 116). Figure 1 illustrates the differences in LRF distributions across languages: for languages with many references per prompt (e.g. en-ja, en-ko), the gap between the top-1 and the mean LRF value is small, indicating an almost uniform distribution. Average top-1 LRF scores also vary across languages (e.g en-vi: 0.25, en-ja: 0.05) depending upon the number of references available per prompt.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 351,
"end": 357,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
{
"text": "For system development, we divide the STAPLE dataset into train, development and test datasets using 72%, 8%, and 20% of source prompts respectively. We refer to these subsets as STAPLE train, internal dev and internal test. Note that the last two differ from the official blind development and test sets available to participants on codalab.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
{
"text": "Other Bitexts We use bitext from OpenSubtitles (Tiedemann, 2012) and Tatoeba (Tiedemann, 2012) as described in Table 3 . The Tatoeba corpus provides multiple reference translations for some sources (with 2 translation per source on average), but unlike in the STAPLE data, these translations are not weighted by frequency of usage.",
"cite_spans": [
{
"start": 47,
"end": 64,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 77,
"end": 94,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
{
"text": "Preprocessing All datasets are pre-processed using Moses tools for normalization, tokenization and lowercasing. We further segment tokens into subwords using a joint source-target Byte Pair Encoding (Sennrich et al., 2016c) operations. For Japanese, we use kytea 3 toolkit for word tokenization.",
"cite_spans": [
{
"start": 199,
"end": 223,
"text": "(Sennrich et al., 2016c)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Loss",
"sec_num": null
},
{
"text": "Model Architecture We use the Transformer model implemented in the Sockeye toolkit as a baseline MT system. Both encoder and decoder are 6-layer Transformer models with model size of 1, 024, feed-forward network size of 4, 096, and 16 attention heads. We adopt label smoothing and weight tying. We tie the output weight matrix with the target embeddings. We use Adam optimizer with initial learning rate of 0.0002.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT configurations",
"sec_num": "4.2"
},
{
"text": "We train several models with the above configuration:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": null
},
{
"text": "\u2022 OpenSubs a baseline model trained and validated on the OpenSubtitles bitext.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": null
},
{
"text": "\u2022 Unweighted builds on the baseline by finetuning on multi-reference samples including the Tatoeba bitext and STAPLE train. We create one training sample per sourcereference pair, and the resulting samples are not weighted. We use the internal dev set (1best reference only) as a validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": null
},
{
"text": "\u2022 Frequency-Aware is fine-tuned as the unweighted model except that STAPLE train is oversampled as described in \u00a7 3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": null
},
{
"text": "We generate n-best list of translations for various models by running beam search with a beam size corresponding to the desired n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": null
},
{
"text": "Classifier The 2-layer feed-forward NN has 5 hidden units and 2 output units. It is trained with the Adam optimizer with an initial learning rate of 0.001 and runs for 2000 epochs on the internal dev set. The best model is selected based on internal test set performance. We consider two losses: the soft macro F1 loss which approximates the official evaluation metric ( \u00a7 3.2) and the standard crossentropy loss as a baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering configurations",
"sec_num": "4.3"
},
{
"text": "We compare our NN based classifer with a standard MT n-best list reranker trained on the internal dev set. We use the n-best batch MIRA ranker (Cherry and Foster, 2012) included in Moses. A threshold to filter candidates in the reranked list is selected by maximizing the Weighted Macro F1 on the internal dev dataset.",
"cite_spans": [
{
"start": 143,
"end": 168,
"text": "(Cherry and Foster, 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking Baseline",
"sec_num": null
},
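{
"text": "A sketch of this threshold selection, assuming per-prompt candidate lists with reranker (or classifier) scores and a weighted_macro_f1 helper implementing the official metric of \u00a7 5 (sketched there); all names are illustrative:\n\ndef select_threshold(scored_nbest, dev_references, thresholds):\n    # scored_nbest: per prompt, a list of (hypothesis, score) pairs.\n    # dev_references: per prompt, a dict of accepted translation -> LRF.\n    # Returns the threshold maximizing Weighted Macro F1 on the dev set.\n    best_t, best_f1 = None, -1.0\n    for t in thresholds:\n        kept = [{h for h, s in nbest if s >= t} for nbest in scored_nbest]\n        f1 = weighted_macro_f1(kept, dev_references)\n        if f1 > best_f1:\n            best_t, best_f1 = t, f1\n    return best_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking Baseline",
"sec_num": null
},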
{
"text": "We use eflomal 5 trained on the Opensubtitles dataset to obtain word alignment between source and translation hypotheses. The language model is trained with the kenlm (Heafield, 2011) toolkit with default hyper-parameters 6 on the target side of the Opensubtitles and the STAPLE dataset. The Right-to-left and Target-to-source MT models were trained on OpenSubtitles (same configuration as in \u00a7 4.2).",
"cite_spans": [
{
"start": 167,
"end": 183,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "We evaluate the lowercased detokenized output of the systems on our internal test dataset using:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Weighted Macro F1 This is the official scoring metric which quantifies how the set of system outputs covers the human-curated acceptable translations, weighted by the LRF of each translation. It is defined as the harmonic mean of unweighted precision (P) and weighted recall (WR) calculated for each prompt e i , and averaged over all the prompts in the corpus. Specifically, using the same notation as introduced in \u00a7 3.1, for each translation T i generated by the MT model, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "WTP e i = t\u2208T i f j i \u2208F i 1[t == f j i ]w j i WFN e i = f j i \u2208T i w j i WR e i = WTP e i WTP e i + WFN e i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The weighted Macro F1 (WMF1) is then given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "WMF1 e i = 2 \u00d7 P e i \u00d7 WR e i P e i + WR e i WMF1 = 1 M M i WMF1 e i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
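{
"text": "A compact sketch of this metric, assuming system outputs are given as one set of lowercased, detokenized strings per prompt and references as a dict from accepted translation to LRF weight; exact string match follows the definitions above:\n\ndef weighted_macro_f1(system_sets, references):\n    # system_sets: list of sets of output translations, one per prompt.\n    # references: list of dicts mapping accepted translations to LRF weights.\n    total = 0.0\n    for out, refs in zip(system_sets, references):\n        if not out:\n            continue  # empty output: P, WR and WMF1 are all zero\n        matched = out & refs.keys()\n        precision = len(matched) / len(out)  # unweighted precision\n        wtp = sum(refs[f] for f in matched)\n        wfn = sum(w for f, w in refs.items() if f not in out)\n        wr = wtp / (wtp + wfn)  # weighted recall\n        if precision + wr > 0:\n            total += 2 * precision * wr / (precision + wr)\n    return total / len(system_sets)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},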
{
"text": "We also report the translation quality of the 1-best neural MT output compared against the highest LRF reference translation using the standard BLEU metric (Papineni et al., 2002) . Table 4 summarizes the evaluation of n-best lists obtained with our neural MT systems.",
"cite_spans": [
{
"start": 156,
"end": 179,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "BLEU@1",
"sec_num": null
},
{
"text": "Baselines We confirm that the neural MT configuration is sound by comparing our neural MT baseline to the provided AWS system. Our baseline (\"OpenSubs\") improves the BLEU@1 score by 2 points for en-pt, and remains 6 points lower for envi, as can be expected given the smaller size of the OpenSubtitles training set. However, the \"Open-Subs\" n-best lists improve over the AWS baseline according to the official task metric (WMF1), establishing that this system is a good starting point for fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Frequency-Aware Fine-Tuning",
"sec_num": "6.1"
},
{
"text": "Fine-Tuning The Frequency-Aware n-best hypotheses consistently yield the best Weighted Recall and Weighted Macro-F1 scores for all languages. The improvement in recall and therefore F1 score is largest for en-ja and en-ko which have larger translation reference sets (Table 4) . Frequency-Aware oversampling also improves precision over the Unweighted model for all but one language (en-pt). The impact on the auxiliary BLEU@1 metric is less consistent: the Frequency-Aware system achieves the best BLEU@1 in 3 out of 5 languages, but outperforms the OpenSubs baseline in 4 out of 5. BLEU@1 drops when finetuning on all the samples without weighting (Unweighted) which we attribute to the increased uncertainty during training as the model is exposed to many different translations for the same source English text. Overall, these results show the benefits of finetuning on task-relevant data and shows that incorporating LRF weights via oversampling improves the ranking of n-best hypotheses. This is further illustrated in Table 5 , which shows the top 10 Vietnamese translations for two randomly sampled source prompts: the Frequency-Aware n-best list yields Weighted Recall of 81% at a Precision of 60% and 76% at a Precision of 100% for the two source prompts respectively, illustrating that the model generates high-quality candidates that cover reference translations well, but not perfectly.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 276,
"text": "(Table 4)",
"ref_id": "TABREF5"
},
{
"start": 1025,
"end": 1032,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Impact of Frequency-Aware Fine-Tuning",
"sec_num": "6.1"
},
{
"text": "How well do n-best translations cover the space of reference learner translations? Figure 2 shows the impact of increasing the decoding beam (and resulting n-best list size) from 10 to 500 for the Frequency-Aware model. For en-pt, while weighted recall increases up to 66%, the drop in precision hurts the weighted F1 score. The oracle F1 score, which represents the Weighted Macro F1 at a Precision of 100%, also increases gradually, reaching a score of 76%. This suggests that the raw n-best lists contain many useful translation candidates but need to be filtered down to better match translations preferred by language learners.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 91,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "N-Best List Quality",
"sec_num": null
},
{
"text": "Due to time constraints, we explore the impact of hypothesis filtering only for en-pt and en-vi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Hypothesis Filtering",
"sec_num": "6.2"
},
{
"text": "Weighted Macro F1 ( as the loss leads to a better balance between Precision and Weighted Recall than cross-entropy. The classifier outperforms the MIRA reranker. Since the reranker is trained to maximize BLEU@1, it tends to prefer candidates that are lexically similar to the top reference translation and misses some of the more diverse learner translations. This confirms the benefits of framing the selection of candidate hypothesis as binary classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering consistently improves Precision and",
"sec_num": null
},
{
"text": "Ablation Experiments show that the MT scores are the most useful of the features used, as they capture not only the generation probability of a candidate hypothesis but estimate adequacy via the Target-to-source neural MT model (Table 8) . Length features help precision but not recall, while the alignment and language model scores have little impact overall. This suggests that the classifier could benefit from improved feature design and selection in future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 237,
"text": "(Table 8)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Filtering consistently improves Precision and",
"sec_num": null
},
{
"text": "How diverse are the translations returned by various system configurations? Following Zhang et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Translation Diversity",
"sec_num": "6.3"
},
{
"text": ", we quantify diversity using the entropy of k-gram distributions within a translation set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Translation Diversity",
"sec_num": "6.3"
},
{
"text": "Ent-k (V) = \u2212 1 w F (w) w\u2208V F (w) log F (w) w F (w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Translation Diversity",
"sec_num": "6.3"
},
{
"text": "where V is the set of all k-grams that appear in the translation set, and F (w) denotes the frequency of w in the translations. The higher the Ent-k score, the greater the diversity. Fine-tuned models improve the diversity of 10best lists compared to the \"OpenSubs\" baseline for both en-vi and en-pt (Table 9) . Overall filtering bridges 40% and 25% of the gap between baseline and reference learner translations for en-pt and envi respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 309,
"text": "(Table 9)",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Analysis of Translation Diversity",
"sec_num": "6.3"
},
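{
"text": "A short sketch of the Ent-k computation on whitespace-tokenized translations, treating each translation set independently as in the formula above; the tokenization choice is our assumption:\n\nimport math\nfrom collections import Counter\n\ndef ent_k(translations, k=2):\n    # Entropy of the k-gram distribution within one translation set;\n    # higher values indicate more diverse outputs (Section 6.3).\n    counts = Counter()\n    for sent in translations:\n        tokens = sent.split()\n        for i in range(len(tokens) - k + 1):\n            counts[tuple(tokens[i:i + k])] += 1\n    total = sum(counts.values())\n    return -sum(c / total * math.log(c / total) for c in counts.values())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Translation Diversity",
"sec_num": "6.3"
},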
{
"text": "A manual examination of translation sets returned by different models suggest that they make complementary errors. We therefore consider combining system outputs by taking the union of the set of translations they return. We evaluate the following combinations (Table 7) : C1 Frequency-aware (10-best) + Unweighted (10best) Figure 2 : Increasing the size of n-best list with the Frequency-Aware system improves the coverage of learner translations for en-pt and en-vi. Oracle F1 is the Weighted Macro F1 at a Precision of 100% and represents the upper bound on WMF1 that can be achieved for a given n-best list. C2 Frequency-aware (10-best) + Frequencyaware (filtered 50-best)",
"cite_spans": [],
"ref_spans": [
{
"start": 261,
"end": 270,
"text": "(Table 7)",
"ref_id": "TABREF10"
},
{
"start": 324,
"end": 332,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Combinations",
"sec_num": "6.4"
},
{
"text": "C3 Unweighted (10-best) + Unweighted (filtered 50-best)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Combinations",
"sec_num": "6.4"
},
{
"text": "C4 Union of all of the above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Combinations",
"sec_num": "6.4"
},
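{
"text": "A minimal sketch of the union operation behind C1-C4, assuming each system's output is a list of per-prompt translation sets aligned by prompt index:\n\ndef combine_systems(*system_outputs):\n    # e.g. c1 = combine_systems(freq_aware_10best, unweighted_10best)\n    # Returns, for each prompt, the union of all systems' translation sets.\n    return [set().union(*sets) for sets in zip(*system_outputs)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Combinations",
"sec_num": "6.4"
},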
{
"text": "For en-pt and en-vi, it helps to combine higher precision unfiltered 10-best lists, and higher recall filtered 50-best lists. For en-pt, the union of all outputs (C4) performs best overall. Recall increases when combining the Frequency-Aware and the Unweighted model (C1) compared to individual lists (Unweighted: +1.6, Frequency-Aware: +2) without compromising Precision. Similar trends are observed when adding the filtered 50-best list to unfiltered 10-best lists (C2: +2.2, C3: +4.8). For en-vi, a different combination (C2) yields the best result, perhaps due to the smaller set of reference translations per source prompt (en-vi: 56, en-pt: 131) and high Precision of the \"Unweighted\" model for en-pt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Combinations",
"sec_num": "6.4"
},
{
"text": "We tested our systems on the official blind development set to select the best performing models for final evaluation on the test set. For Portuguese and Vietnamese, our official submissions include frequency-aware hypothesis generation and hypothesis filtering: en-vi C2: Frequency-aware (10-best) + Frequencyaware (filtered 50-best) Table 8 : Impact of dropping one feature type ( \u00a7 3.2) at a time from the \"All\" configuration for en-vi classifier. en-pt C4: Frequency-aware (10-best) + Frequencyaware (filtered 50-best) + Unweighted (10best) + Unweighted (filtered 50-best)",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Submitted Systems",
"sec_num": "7"
},
{
"text": "We did not build hypothesis filtering models for the other languages, and submitted systems based only on unfiltered models: en-ja Frequency-aware (50-best) + Unweighted (50best)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Systems",
"sec_num": "7"
},
{
"text": "en-hu Frequency-aware (10-best) + Unweighted (10best)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Systems",
"sec_num": "7"
},
{
"text": "en-ko Frequency-aware (50-best) + Unweighted (50best) Table 10 and 11 compares our submissions to baselines, as well as top and median submissions across participants, for all the languages. On our focus languages (en-pt and en-vi), where systems benefitted from both frequency-aware generation and filtering models, our submissions obtain a Weighted Macro F1 score of 0.539 for en-vi and 0.525 for en-pt on the official test set, achieving a rank of 2 nd and 4 th on the leader-board, within 2% of the top performing submission. On the other language pairs, where our submissions did not use any filtering, Weighted Macro F1 outperform the baselines and median submission consistently. Interestingly on the en-ja task, our system ranks second amongst all the submissions despite not using any filtering. ",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Submitted Systems",
"sec_num": "7"
},
{
"text": "We proposed two strategies to obtain multiple outputs that mimic translations by produced by language learners from a standard neural MT model. Our experiments showed that (1) finetuning MT models using all reference translations and their weight yields more diverse n-best hypotheses that better reflect learner preferences, and (2) filtering these n-best lists using a feature-rich classifier trained to maximize an approximation of the STAPLE evaluation metric yields further improvements. Combinations of systems that use these two strategies approach the top scoring submission in the official evaluation. While these results suggest that some degree of output diversity can be achieved with little change to core neural MT models, oracle scores obtained with unfiltered n-best lists indicate that better modeling the space of learner translations might benefit both candidate generation and the filtering model in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We set O = 1000 in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Preliminary experiments showed that a LRF-weighted version of this loss resulted in unstable training and inconsistent results depending on initialization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/neubig/kytea 4 https://github.com/awslabs/sockeye",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/robertostling/ eflomal 6 https://github.com/kpu/kenlm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Controlling text complexity in neural machine translation",
"authors": [
{
"first": "Sweta",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1549--1564",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Sweta Agrawal and Marine Carpuat. 2019. Control- ling text complexity in neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1549- 1564, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages 65-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Batch tuning strategies for statistical machine translation",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "427--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and George Foster. 2012. Batch tun- ing strategies for statistical machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 427-436. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Noisy parallel approximate decoding for conditional recurrent language model. CoRR",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho. 2016. Noisy parallel approximate decoding for conditional recurrent language model. CoRR, abs/1605.03835.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Achieving human parity on automatic chinese to english news translation",
"authors": [
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Aue",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Chowdhary",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Xuedong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Renqian",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shuangzhi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on auto- matic chinese to english news translation. CoRR, abs/1803.05567.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "KenLM: Faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197, Edinburgh, Scotland. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A deep model with local surrogate loss for general cost-sensitive multi-label learning",
"authors": [
{
"first": "Cheng-Yu",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Yi-An",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Hsuan-Tien",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng-Yu Hsieh, Yi-An Lin, and Hsuan-Tien Lin. 2018. A deep model with local surrogate loss for general cost-sensitive multi-label learning. In Thirty-Second AAAI Conference on Artificial Intel- ligence.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Controlling output length in neural encoder-decoders",
"authors": [
{
"first": "Yuta",
"middle": [],
"last": "Kikuchi",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Ryohei",
"middle": [],
"last": "Sasano",
"suffix": ""
},
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1328--1338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hi- roya Takamura, and Manabu Okumura. 2016. Con- trolling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 1328-1338.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. CoRR, abs/1510.03055.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mutual information and diverse decoding improve neural machine translation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Dan Jurafsky. 2016. Mutual information and diverse decoding improve neural machine trans- lation. CoRR, abs/1601.00372.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Simultaneous translation and paraphrase for language education",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Klinton",
"middle": [],
"last": "Bicknell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brust",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Mcdowell",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Mayhew, Klinton Bicknell, Chris Brust, Bill McDowell, Will Monroe, and Burr Settles. 2020. Si- multaneous translation and paraphrase for language education. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving machine translation performance by exploiting non-parallel corpora",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Dragos",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Munteanu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "4",
"pages": "477--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2005. Im- proving machine translation performance by exploit- ing non-parallel corpora. Computational Linguis- tics, 31(4):477-504.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural reranking improves subjective quality of machine translation: Naist at wat2015",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Asian Translation (WAT2015)",
"volume": "",
"issue": "",
"pages": "35--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Makoto Morishita, and Satoshi Naka- mura. 2015. Neural reranking improves subjective quality of machine translation: Naist at wat2015. In Proceedings of the 2nd Workshop on Asian Transla- tion (WAT2015), pages 35-41.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-Task Neural Models for Translating Between Styles Within and Across Languages",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2018,
"venue": "27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Niu, Sudha Rao, and Marine Carpuat. 2018. Multi-Task Neural Models for Translating Between Styles Within and Across Languages. In 27th Inter- national Conference on Computational Linguistics (COLING 2018).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Truly exploring multiple references for machine translation evaluation",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 18th Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Qin and Lucia Specia. 2015. Truly exploring multiple references for machine translation evalua- tion. In Proceedings of the 18th Annual Conference of the European Association for Machine Transla- tion, pages 113-120, Antalya, Turkey.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Personalized Machine Translation: Preserving Original Author Traits",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Shachar",
"middle": [],
"last": "Mirkin",
"suffix": ""
},
{
"first": "Raj",
"middle": [
"Nath"
],
"last": "Patel",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.05461"
]
},
"num": null,
"urls": [],
"raw_text": "Ella Rabinovich, Shachar Mirkin, Raj Nath Patel, Lu- cia Specia, and Shuly Wintner. 2016. Personal- ized Machine Translation: Preserving Original Au- thor Traits. arXiv:1610.05461 [cs].",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Controlling Politeness in Neural Machine Translation via Side Constraints",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "35--40",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1005"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Controlling Politeness in Neural Machine Translation via Side Constraints. pages 35-40. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. Proceedings of the Meet- ing of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016c. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mixture models for diverse machine translation: Tricks of the trade",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. CoRR, abs/1902.07816.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Generating diverse translations with sentence codes",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1823--1827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Shu, Hideki Nakayama, and Kyunghyun Cho. 2019. Generating diverse translations with sentence codes. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1823-1827.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Parallel data, tools and interfaces in opus",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Lrec",
"volume": "2012",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Lrec, volume 2012, pages 2214- 2218.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Identifying semantic divergences in parallel text without annotations",
"authors": [
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1503--1515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yogarshi Vyas, Xing Niu, and Marine Carpuat. 2018. Identifying semantic divergences in parallel text without annotations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1503-1515. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Generating informative and diverse conversational responses via adversarial information maximization",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1810--1820",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Sys- tems, pages 1810-1820.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multi-reference training with pseudo-references for neural translation and text generation",
"authors": [
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3188--3197",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1357"
]
},
"num": null,
"urls": [],
"raw_text": "Renjie Zheng, Mingbo Ma, and Liang Huang. 2018. Multi-reference training with pseudo-references for neural translation and text generation. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3188-3197, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table/>",
"text": "STAPLE data: given a prompt in English, translation alternatives are weighted according to Learner Response Frequency (LRF)",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Language</td><td colspan=\"2\">Source Train Dev Test Types Tokens Train Dev</td><td colspan=\"3\">Target Test Types Tokens</td><td>T/S</td></tr><tr><td>en-pt</td><td>2.8K 300 800 2.3K</td><td colspan=\"3\">3.8M 380K 42K 104K 8.7K</td><td colspan=\"2\">4M 131</td></tr><tr><td>en-vi</td><td>2.5K 280 700 2.3K</td><td>950K 142K 14K</td><td colspan=\"2\">38K 1.7K</td><td>1.3M</td><td>56</td></tr><tr><td>en-ja</td><td>1.8K 200 500 1.3K</td><td colspan=\"2\">3.8M 600K 65K 166K</td><td>4K</td><td colspan=\"2\">6.8M 342</td></tr><tr><td>en-ko</td><td>1.8K 200 500 1.3K</td><td colspan=\"2\">3M 500K 57K 137K</td><td>17K</td><td colspan=\"2\">2.6M 280</td></tr><tr><td>en-hu</td><td>2.8K 320 800 1.5K</td><td>1.1M 182K 21K</td><td>47K</td><td>11K</td><td>1M</td><td>62</td></tr></table>",
"text": "model with 32, 000",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Language OpenSubtitles Tatoeba</td></tr><tr><td>en-pt</td><td>47.2M</td><td>196K</td></tr><tr><td>en-vi</td><td>3M</td><td>5.3K</td></tr><tr><td>en-ja</td><td>1.8M</td><td>200K</td></tr><tr><td>en-ko</td><td>1.2M</td><td>2.7K</td></tr><tr><td>en-hu</td><td>34.5M</td><td>102K</td></tr></table>",
"text": "STAPLE data statistics: segments in our train/dev/test split, overall vocabulary statistics and average translations per source prompt (T/S).",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "Additional bitext used for training and finetuning MT models",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Language Method</td><td colspan=\"2\">BLEU@1 n-best size</td><td>P</td><td colspan=\"2\">WR WMF1</td></tr><tr><td/><td>AWS</td><td>68.9</td><td>1</td><td colspan=\"2\">86.67 14.47</td><td>21.60</td></tr><tr><td>en-pt</td><td>OpenSubs Unweighted</td><td>70.9 61.5</td><td>10 10</td><td colspan=\"2\">49.66 39.18 72.69 40.58</td><td>37.39 46.11</td></tr><tr><td/><td>Frequency-Aware</td><td>76.6</td><td>10</td><td colspan=\"2\">67.31 44.34</td><td>47.4</td></tr><tr><td/><td>AWS</td><td>61.4</td><td>1</td><td colspan=\"2\">65.09 13.32</td><td>19.57</td></tr><tr><td>en-vi</td><td>OpenSubs Unweighted</td><td>55.2 49.8</td><td>10 10</td><td colspan=\"2\">29.10 31.38 56.43 42.91</td><td>25.76 41.00</td></tr><tr><td/><td>Frequency-Aware</td><td>71.9</td><td>10</td><td colspan=\"3\">61.61 54.37 51.87</td></tr><tr><td/><td>AWS</td><td>50.6</td><td>1</td><td>67.68</td><td>2.18</td><td>4.01</td></tr><tr><td>en-ja</td><td>OpenSubs Unweighted</td><td>32.7 30.1</td><td>50 50</td><td colspan=\"2\">2.94 45.71 21.21 3.47</td><td>2.52 24.88</td></tr><tr><td/><td>Frequency-Aware</td><td>42.4</td><td>50</td><td colspan=\"3\">47.29 22.83 26.57</td></tr><tr><td/><td>AWS</td><td>63.4</td><td>1</td><td colspan=\"2\">83.70 18.12</td><td>27.12</td></tr><tr><td>en-hu</td><td>OpenSubs Unweighted</td><td>64.4 26.2</td><td>10 10</td><td>41.51 47.11</td><td>42.6 29.7</td><td>37.83 31.62</td></tr><tr><td/><td>Frequency-Aware</td><td>51.4</td><td>10</td><td colspan=\"3\">52.22 41.05 41.69</td></tr><tr><td/><td>AWS</td><td>27.9</td><td>1</td><td>60.68</td><td>2.26</td><td>4.11</td></tr><tr><td>en-ko</td><td>OpenSubs Unweighted</td><td>9.2 14.8</td><td>50 50</td><td>12.53 33.82</td><td>7.41 18.8</td><td>7.20 19.78</td></tr><tr><td/><td>Frequency-Aware</td><td>30.2</td><td>50</td><td colspan=\"3\">35.31 20.92 21.94</td></tr></table>",
"text": ". The binary classifier that optimizes Soft Macro-F1 performs best,",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"text": "Frequency-Aware systems outperform both OpenSubs and Unweighted models for all languages. The size of the n-best list for each model was selected based on the WMF1 score on the internal test set.",
"num": null,
"html": null
},
"TABREF7": {
"type_str": "table",
"content": "<table/>",
"text": "Frequency-Aware 10-best Vietnamese output for two randomly selected English prompts. LRF values are given for translations found in the reference set.",
"num": null,
"html": null
},
"TABREF9": {
"type_str": "table",
"content": "<table><tr><td>Method</td><td/><td>en-pt</td><td/><td>en-vi</td></tr><tr><td/><td>P</td><td>R WR WMF1</td><td>P</td><td>R WR WMF1</td></tr><tr><td>Unweighted (10-best)</td><td colspan=\"4\">72.69 5.53 40.58 46.11 56.43 10.32 42.19 41.00</td></tr><tr><td>Unweighted (filtered 50-best)</td><td colspan=\"4\">67.81 9.68 45.71 48.17 63.14 15.23 54.35 51.48</td></tr><tr><td>Frequency-Aware (10-best)</td><td colspan=\"4\">67.31 5.07 44.34 47.40 61.61 11.28 54.37 51.87</td></tr><tr><td>Frequency-</td><td/><td/><td/></tr></table>",
"text": "Filtering n-best lists consistently improves WMF1 and substantially reduces the size of the output set (K) Aware (filtered 50-best) 64.33 6.40 36.94 41.44 65.15 15.33 55.21 53.69 C1 65.09 7.13 47.52 49.31 55.04 14.82 57.57 50.73 C2 64.33 7.30 48.67 48.81 60.41 16.07 60.19 53.57 C3 66.41 10.32 50.18 50.17 56.19 15.93 54.89 48.05 C4 59.76 11.60 53.56 50.79 53.75 18.31 61.04 50.78",
"num": null,
"html": null
},
"TABREF10": {
"type_str": "table",
"content": "<table><tr><td>Features</td><td>P</td><td>R</td><td>WR WMF1</td></tr><tr><td>All</td><td colspan=\"3\">63.71 15.97 55.91 54.04</td></tr><tr><td colspan=\"4\">-LM score 65.10 15.35 55.62 53.92</td></tr><tr><td colspan=\"4\">-Alignment 65.21 15.27 55.08 53.86</td></tr><tr><td>-Length</td><td colspan=\"3\">58.44 16.49 55.98 52.53</td></tr><tr><td colspan=\"4\">-MT Scores 43.77 10.88 31.02 28.06</td></tr><tr><td>Oracle</td><td colspan=\"3\">100 28.28 70.31 77.90</td></tr></table>",
"text": "Combination of unfiltered 10-best lists (with better precision) and filtered 50-best lists (with better recall) improves Weighted Macro F1. See \u00a7 6.4 for details on combinations.",
"num": null,
"html": null
},
"TABREF12": {
"type_str": "table",
"content": "<table/>",
"text": "Diversity in translation sets: Filtered sets are more diverse, bridging 40% of the gap between baseline and reference translations for en-pt.",
"num": null,
"html": null
},
"TABREF13": {
"type_str": "table",
"content": "<table><tr><td colspan=\"6\">Method en-vi en-pt en-ja en-hu en-ko</td></tr><tr><td>AWS</td><td colspan=\"5\">0.198 0.213 0.043 0.281 0.041</td></tr><tr><td colspan=\"6\">Fairseq 0.254 0.136 0.033 0.124 0.049</td></tr><tr><td colspan=\"6\">Median 0.377 0.436 0.239 0.452 0.230</td></tr><tr><td>Top</td><td colspan=\"5\">0.558 0.552 0.318 0.555 0.404</td></tr><tr><td>Ours</td><td colspan=\"5\">0.539 0.525 0.294 0.469 0.255</td></tr><tr><td>Rank</td><td>2 nd</td><td>4 th</td><td>2 nd</td><td>3 rd</td><td>3 rd</td></tr></table>",
"text": "Excerpt from official results: weighted Macro F1 on the STAPLE dev set",
"num": null,
"html": null
},
"TABREF14": {
"type_str": "table",
"content": "<table/>",
"text": "Excerpt from official results: weighted Macro F1 on the STAPLE test set",
"num": null,
"html": null
}
}
}
}