{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:06:44.731103Z" }, "title": "Simultaneous Translation and Paraphrase for Language Education", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "", "affiliation": {}, "email": "stephen@duolingo.com" }, { "first": "Klinton", "middle": [], "last": "Bicknell", "suffix": "", "affiliation": {}, "email": "klinton@duolingo.com" }, { "first": "Chris", "middle": [], "last": "Brust", "suffix": "", "affiliation": {}, "email": "chrisb@duolingo.com" }, { "first": "Bill", "middle": [], "last": "Mcdowell", "suffix": "", "affiliation": {}, "email": "mcdowell@duolingo.com" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "", "affiliation": {}, "email": "monroe@duolingo.com" }, { "first": "Burr", "middle": [], "last": "Settles", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present the task of Simultaneous Translation and Paraphrasing for Language Education (STAPLE). Given a prompt in one language, the goal is to generate a diverse set of correct translations that language learners are likely to produce. This is motivated by the need to create and maintain large, high-quality sets of acceptable translations for exercises in a language-learning application, and synthesizes work spanning machine translation, MT evaluation, automatic paraphrasing, and language education technology. We developed a novel corpus with unique properties for five languages (Hungarian, Japanese, Korean, Portuguese, and Vietnamese), and report on the results of a shared task challenge which attracted 20 teams to solve the task. In our meta-analysis, we focus on three aspects of the resulting systems: external training corpus selection, model architecture and training decisions, and decoding and filtering strategies. We find that strong systems start with a large amount of generic training data, and then finetune with in-domain data, sampled according to our provided learner response frequencies.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present the task of Simultaneous Translation and Paraphrasing for Language Education (STAPLE). Given a prompt in one language, the goal is to generate a diverse set of correct translations that language learners are likely to produce. This is motivated by the need to create and maintain large, high-quality sets of acceptable translations for exercises in a language-learning application, and synthesizes work spanning machine translation, MT evaluation, automatic paraphrasing, and language education technology. We developed a novel corpus with unique properties for five languages (Hungarian, Japanese, Korean, Portuguese, and Vietnamese), and report on the results of a shared task challenge which attracted 20 teams to solve the task. In our meta-analysis, we focus on three aspects of the resulting systems: external training corpus selection, model architecture and training decisions, and decoding and filtering strategies. 
We find that strong systems start with a large amount of generic training data, and then finetune with in-domain data, sampled according to our provided learner response frequencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine translation systems are typically trained to produce a single output, but in certain cases, it is desirable to have many possible translations of a given input text. For example, Duolingo, the world's largest language-learning platform, uses translation-based exercises for some of its lessons. For any given translation prompt there may be hundreds or thousands of valid responses, so we use a set of human-curated translations in order to grade learner responses. The manual process of maintaining these sets is laborious, and we believe it can be improved with the aid of rich multi-output translation and paraphrase systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prompt: is my explanation clear? Reference Translation: a minha explica\u00e7\u00e3o est\u00e1 clara? Accepted Translations (Weight): minha explica\u00e7\u00e3o est\u00e1 clara? (.267); minha explica\u00e7\u00e3o \u00e9 clara? (.162); a minha explica\u00e7\u00e3o est\u00e1 clara? (.111); a minha explica\u00e7\u00e3o \u00e9 clara? (.088); minha explana\u00e7\u00e3o est\u00e1 clara? (.057); est\u00e1 clara minha explica\u00e7\u00e3o? (.044); minha explana\u00e7\u00e3o \u00e9 clara? (.039); a minha explana\u00e7\u00e3o est\u00e1 clara? (.036); ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accepted Translations", "sec_num": null }, { "text": "Table 1 : An example from the Portuguese dataset. In this task, teams are given an English prompt and a reference translation, and are required to produce as many of the accepted translations as possible. The evaluation favors translations with higher weight, which is a measure of learner response frequency.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Accepted Translations", "sec_num": null }, { "text": "To this end, we introduce a new task called Simultaneous Translation and Paraphrasing for Language Education (STAPLE). 
From the perspective of the research community, we believe this poses an interesting exercise that is similar to machine translation (MT), but also provides data with new and unique properties that we expect to be of interest to researchers in MT evaluation, multilingual paraphrasing, and even language education technology. It is our hope that this new task can help synthesize efforts from these various subfields to further the state of the art, and broaden their applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accepted Translations", "sec_num": null }, { "text": "For the STAPLE task, participants begin with English prompts and generate high-coverage sets of plausible translations in five different languages. For training and evaluation, each prompt is paired with a relatively comprehensive set of handcrafted, field-tested accepted translations, each weighted and ranked according to their empirical frequency among Duolingo learners. We also provide a high-quality automatic reference translation of each prompt that may (optionally) be used as a reference or anchor point, in the event that researchers want to explore paraphrase-only approaches (this also serves as a strong baseline). See Table 1 for an example from the Portuguese dataset.", "cite_spans": [], "ref_spans": [ { "start": 633, "end": 640, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Shared Task Description", "sec_num": "2" }, { "text": "Data for the task are derived from Duolingo, a free, award-winning, online language-learning platform. Since launching in 2012, hundreds of millions of learners worldwide have enrolled in Duolingo's game-like courses via the website 1 or mobile apps. Learning happens through a variety of interactive exercise types, combining reading, writing, listening, and speaking activities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Collection", "sec_num": "2.1" }, { "text": "One such format is a translation exercise, shown in Figure 1 , in which the learner is shown a prompt in one language, and asked to translate it into the other. Since English is by far the most popular language to learn on Duolingo, we created a task corpus by sampling prompts from English courses, in which users are shown an English sentence, and then asked to translate it into a language they already know. For instance, the examples in Figure 1 come from the course for Portuguese speakers learning English.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 60, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 442, "end": 450, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Corpus Collection", "sec_num": "2.1" }, { "text": "Naturally, some prompts have more accepted translations (valid learner responses) than others, depending on such factors as polysemy, synonymy, or prompt length. We filtered out prompts for which the number of accepted translations was in the top or bottom deciles of a course, to avoid outliers. Although each accepted translation is technically correct, usually a small number of them are considered most fluent or idiomatic. To estimate this distribution empirically, we gathered learner response data from October-November 2019. 
For each translation, we counted the number of times that learners produced that translation (with some allowances for punctuation and capitalization).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Collection", "sec_num": "2.1" }, { "text": "This provided a count c_t for each translation t in the set of accepted translations A. Since many translations were never attested in learner data, we then smoothed and normalized these counts to produce a learner response frequency (LRF) weight w_t for each translation, such that they sum to 1 for each prompt:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Collection", "sec_num": "2.1" }, { "text": "w_t = \\frac{\\sqrt{c_t + 1}}{\\sum_{t' \\in A} \\sqrt{c_{t'} + 1}}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Collection", "sec_num": "2.1" }, { "text": "These weights are a unique feature of the STAPLE corpus, and are found in almost no other datasets. Having gathered prompts from each course, we shuffled the prompt set and selected 500 prompts for development and 500 for test. Of the remaining prompts for each course, we created a training set by sampling according to course size, so smaller courses (e.g., Vietnamese) have fewer prompts. Statistics on the datasets can be found in Table 2 . Table 2 : Dataset sizes by number of prompt sentences, and total number of accepted translations.", "cite_spans": [], "ref_spans": [ { "start": 431, "end": 438, "text": "Table 2", "ref_id": null }, { "start": 441, "end": 448, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Corpus Collection", "sec_num": "2.1" }, { "text": "We provide data for translating English prompts into five languages: Hungarian, Japanese, Korean, Portuguese (Brazilian), and Vietnamese. These span five different language families, three different writing systems, and represent a wide variety of popular Duolingo courses. For example, as of this writing, English from Portuguese is the fourth-largest Duolingo course overall, whereas English from Korean is median-sized, with the others falling in between. As such, much effort has gone into developing their accepted translation sets, but there is probably still room for improvement. These five languages also vary widely in their status as high-to-low-resource languages in NLP research. For the shared task, participants were allowed to submit results to any or all of these language tracks. Furthermore, there were no restrictions on the use of external data; teams were encouraged to use any available monolingual or parallel corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Five Language Tracks", "sec_num": "2.2" }, { "text": "The main scoring metric is (macro) weighted F1 with respect to the accepted translations. 
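Concretely, the LRF weighting from \u00a72.1 and the metric defined in this section can be sketched together in a few lines of Python. This is a minimal illustration with hypothetical counts (accents omitted), not the official scoring script:

import math

def lrf_weights(counts):
    # Smooth raw learner counts as sqrt(c + 1), then normalize so that the
    # weights for one prompt's accepted translations sum to 1 (Section 2.1).
    smoothed = {t: math.sqrt(c + 1.0) for t, c in counts.items()}
    total = sum(smoothed.values())
    return {t: s / total for t, s in smoothed.items()}

def weighted_f1(predicted, weights):
    # Unweighted precision, LRF-weighted recall, and their harmonic mean.
    # Macro weighted F1 averages this value over all prompts in the test set.
    if not predicted:
        return 0.0
    hits = predicted & weights.keys()
    precision = len(hits) / len(predicted)
    recall = sum(weights[t] for t in hits)  # accepted weights sum to 1
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

w = lrf_weights({"minha explicacao esta clara?": 120,
                 "minha explicacao e clara?": 45,
                 "esta clara minha explicacao?": 0})
print(weighted_f1({"minha explicacao esta clara?", "e clara a explicacao?"}, w))
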
In short, systems are scored based on how well they can return all human-curated accepted translations, but with lower penalties on recall for failing to produce translations that learners rarely submit anyway.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "For each prompt sentence s with accepted translation set A_s in the corpus, we evaluate the weighted recall of a system's predicted translation set P_s as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "\\text{Weighted Recall}(P_s) = \\frac{\\sum_{t \\in P_s \\cap A_s} w_t}{\\sum_{t \\in A_s} w_t}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "Precision is calculated in an unweighted fashion (as there is no weight for false positives), and weighted F1 for each P_s is simply the usual harmonic mean of precision and weighted recall. These weighted F1s for each prompt are then averaged over the entire evaluation dataset D:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "\\text{(Macro) Weighted F1} = \\frac{1}{|D|} \\sum_{s \\in D} \\text{Weighted F1}(P_s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "Since evaluation is done by matching predictions with accepted translations, we ignore any differences due to punctuation, capitalization, or multiple whitespaces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "2.3" }, { "text": "We announced the shared task on December 20, 2019, with information about the task timeline, data, etc., published on a regular basis to a dedicated website 2 . We released the training data on January 15, blind dev data on March 2, and blind test data on March 30, 2020. During the blind dev phase, participants were able to submit up to five submissions per day to an online evaluation leaderboard. Originally, we had planned on closing the dev phase at the start of the test phase, but upon request, we kept it open so that teams could continue to experiment and submit to the dev leaderboard even after the test phase opened, without counting against their final submission(s). We allowed up to three submissions in total to the test leaderboard (to account for technical problems, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenge Timeline", "sec_num": "2.4" }, { "text": "A total of 20 teams participated during the dev phase, 13 teams during the test phase, and 11 teams submitted system description papers. Of the teams with system descriptions, three of them (jbrem, sweagraw, jindra.helcl) participated in all five language tracks. One team (rakchada) submitted to two tracks, and the remaining teams only submitted to a single track, with Japanese and Portuguese being the most popular. Table 3 : F1 results for all systems, on all languages. Rank is assigned according to statistical significance ( \u00a73).", "cite_spans": [], "ref_spans": [ { "start": 420, "end": 427, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "Official weighted F1 results are shown in Table 3 . Ranks are determined using an approximate permutation test with 100,000 samples (Pad\u00f3, 2006) , and adjacent-scoring systems are considered significantly different at p < .05. Figure 3 provides additional detail on precision and weighted recall. 
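The ranking procedure can be sketched as a paired approximate randomization test over per-prompt weighted F1 scores; the following is a minimal sketch with hypothetical scores, not the exact code behind the official rankings:

import random

def paired_permutation_test(scores_a, scores_b, trials=100_000, seed=13):
    # Approximate randomization (Pado, 2006): randomly swap the two systems'
    # paired per-prompt scores and count how often the shuffled difference
    # is at least as extreme as the observed one.
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b))
    extreme = 0
    for _ in range(trials):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:
                a, b = b, a
            diff += a - b
        if abs(diff) >= observed:
            extreme += 1
    return (extreme + 1) / (trials + 1)  # smoothed two-sided p-value

p = paired_permutation_test([0.61, 0.48, 0.55, 0.70],
                            [0.58, 0.47, 0.51, 0.66], trials=10_000)
print(p < .05)
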
Overall, teams outperformed our provided baselines by a wide margin, and submissions tended to score higher on precision than weighted recall.", "cite_spans": [ { "start": 132, "end": 144, "text": "(Pad\u00f3, 2006)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 3", "ref_id": null }, { "start": 227, "end": 235, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "We prepared two very different baselines. For baseline_aws, we used Amazon Translate 3 to generate a single \"best\" machine translation from English into the target language. These were also provided as reference translations at each phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.1" }, { "text": "For baseline_fairseq, we used the fairseq framework (Ott et al., 2019) trained solely on the STAPLE task data. We created bitexts by pairing English prompts with each of their target language translations (making no use of the weights). The baseline employs a convolutional neural network (CNN) using byte-pair encoding (BPE) with a vocabulary size of 20,000, and simply outputs default n-best lists of size 10. While we ensured that the output BLEU scores of this model were sensible, we did not tune any parameters, instead treating this as a baseline that should be attainable by any team with minimal effort. Our baseline code was provided as a starting point for participants, and many chose to derive their systems from it. ", "cite_spans": [ { "start": 52, "end": 70, "text": "(Ott et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.1" }, { "text": "With few exceptions, participating teams followed the generalized pipeline illustrated in Figure 2 . This consists of (1) training a high-quality machine translation model using massive but mostly out-of-domain corpora, (2) fine-tuning the model using STAPLE task corpora (and sometimes others), and then (3) employing various tricks for diverse output generation and filtering. jbrem (Khayrallah et al., 2020) took an approach involving score-based filtering of n-best lists from a Transformer model pre-trained on large external corpora and then fine-tuned on the STAPLE data. The authors describe benefits from using various pre-training datasets, two different filtering methods, and various ways of upweighting high-frequency (high-weight) translations. The resulting system was among the strongest in the competition, ranking first in all five tracks.", "cite_spans": [ { "start": 384, "end": 409, "text": "(Khayrallah et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 90, "end": 98, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "nickeilf (Li et al., 2020 ) explored a family of diversification approaches including beam expansion, Monte Carlo random dropout, lexical substitution, and mixture-of-experts models, combined through ensemble-based consensus voting to generate a high-quality set of translation suggestions. 
This tied for first place in the Portuguese track.", "cite_spans": [ { "start": 9, "end": 25, "text": "(Li et al., 2020", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "rakchada (Chada, 2020) used pre-trained Transformer models fine-tuned on the STAPLE data with an oversampling trick that afforded more weight to translations with higher frequency. They then used a classifier to filter the n-best lists based on predicted learner frequency. This tied for first place in the Hungarian and Portuguese tracks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "jspak3 (Park et al., 2020 ) took a similar approach to the original BART setup (Lewis et al., 2019) , except they fine-tuned the model not only on larger parallel corpora, but also on the STAPLE data. This ranked second in the Korean track.", "cite_spans": [ { "start": 7, "end": 25, "text": "(Park et al., 2020", "ref_id": "BIBREF21" }, { "start": 79, "end": 99, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "sweagraw (Agrawal and Carpuat, 2020 ) used a Transformer model pre-trained on the OpenSubtitles corpus, then fine-tuned on Tatoeba and the STAPLE data ( \u00a74.1), with the STAPLE translations oversampled to capture frequency. Resulting n-best lists were filtered with a two-layer neural classifier optimized for a soft-F1 objective. This ranked second or third in all five language tracks.", "cite_spans": [ { "start": 9, "end": 35, "text": "(Agrawal and Carpuat, 2020", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "masahiro (Kaneko et al., 2020 ) took a simple ensemble approach that requires no modification to an off-the-shelf NMT system (fairseq). The authors train multiple forward (L2R) and backward (R2L) models using different initial seeds, first by pre-training on general corpora and then fine-tuning on STAPLE data. Their experiments show that ensembling forward and backward models yields more diversity and higher F1 than simply using different seeds alone. This tied for second place in the Japanese track.", "cite_spans": [ { "start": 9, "end": 29, "text": "(Kaneko et al., 2020", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "mzy (Yang et al., 2020) explored three particular strategies: pre-training on larger corpora before fine-tuning on in-domain corpora, using diverse beam search, and finally reranking candidate translations. The authors found that first fine-tuning on a similar intermediate corpus was better than fine-tuning on the STAPLE data alone. Diverse beam search provided modest further gains, although they report no improvement from beam re-ranking. This ranked third in the Japanese track.", "cite_spans": [ { "start": 4, "end": 23, "text": "(Yang et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "dcu (Haque et al., 2020) compared both phrase-based and neural models by extending the STAPLE data with additional corpora (selected for similarity to the task data under a language model), with the neural model performing better. 
They generated sets of high-scoring predictions using beam search, majority voting, and other techniques, and also ran these initial translations through an additional paraphrasing model, placing third in the Portuguese track.", "cite_spans": [ { "start": 4, "end": 24, "text": "(Haque et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "jindra.helcl (Libovick\u00fd et al., 2020 ) trained a Transformer model by combining STAPLE data with additional parallel corpora and back-translated monolingual corpora. They also employed a filtering classifier that predicts whether their models' beam search outputs fall within the accepted translations. This ranked third or fourth in all five tracks.", "cite_spans": [ { "start": 13, "end": 36, "text": "(Libovick\u00fd et al., 2020", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "darkside (Nomoto, 2020) took a very different approach, treating the task as a paraphrase generation problem and using no data beyond what was provided for the shared task. They explored two approaches, both based on autoencoders. The first is a sequence-to-sequence model with Gaussian noise added to the context vector, and the second is based on a conditional Variational Autoencoder, which has seen success in generating variations of input content in the literature (Bowman et al., 2015) . This ranked fifth in the Japanese track.", "cite_spans": [ { "start": 467, "end": 488, "text": "(Bowman et al., 2015)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "nagoudi (Nagoudi et al., 2020) used a combination of data augmentation and ensembles. They combined STAPLE data with additional parallel corpora to train their models, finding (curiously) that this outperformed the fine-tuning approach employed by many others. They generated multiple translations by passing the source sentence through an ensemble of model training checkpoints, taking the n-best outputs from each and de-duplicating. This ranked fifth in the Portuguese track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "3.2" }, { "text": "In this section, we analyze different facets of the various approaches taken, in an effort to understand which design choices were most impactful on final results. We identified three major areas of variance: use of external training corpora ( \u00a74.1), model architecture and training procedures ( \u00a74.2), and decoding and filtering strategies ( \u00a74.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Analyses", "sec_num": "4" }, { "text": "The STAPLE dataset is relatively small compared to many modern machine translation efforts. This is by design: it is challenging to develop a parallel corpus that is complete with many acceptable translations. One of our goals in organizing this task was to see how teams could effectively leverage existing corpora, with a modest amount of in-domain data, to bootstrap high-quality models for the task. Most teams began with a generic MT system pre-trained on massive but out-of-domain parallel corpora, either before or in tandem with the STAPLE task data. These were largely drawn from the Open Parallel Corpus (OPUS) project (Tiedemann, 2012) . 
One natural question is whether the choice to train on a particular dataset from this collection had any meaningful impact on final results.", "cite_spans": [ { "start": 630, "end": 647, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "External Training Corpora", "sec_num": "4.1" }, { "text": "To answer this question, we coded each team with feature variables indicating each corpus they reported using for their final submission, and used a regression analysis to see if these data choices significantly impacted precision, weighted recall, and weighted F1 scores for each prompt in the test set 4 . To analyze this properly, however, we need to distinguish between effects among data choices that are actually meaningful versus those that can be explained by sampling error due to random variations among prompts, tracks, or teams. To do this, we use a linear mixed-effects model (cf., Baayen, 2008, Ch. 7) . In addition to modeling the fixed effects of the various corpora, we can also model the random effects represented by the prompt ID (some sentences may be longer or harder), the track ID (the languages inherently vary), and the team ID (teams will differ in other aspects not captured by these corpus variables). Table 4 presents a mixed-effects analysis for the most-cited corpora among participating teams, each used by at least four different systems. The intercepts can be interpreted as \"average\" metrics, which then go up or down according to fixed and random effects. Only the Tatoeba corpus appears to have a marginally significant positive impact on metrics. In other words, we might expect that pre-training with Tatoeba would add +.214 to prompt-specific F1 scores (p = .088), all else being equal. Since Tatoeba is a collaborative online database (https://tatoeba.org) of sentences geared towards foreign language learners (some of which even have multiple translations, although no weights), it is extremely similar to the STAPLE task domain. Thus it makes sense that this corpus would be helpful; in fact, sweagraw and jindra.helcl included it alongside the STAPLE data in fine-tuning their models. Other effects are smaller and statistically insignificant, suggesting that the particular choice of supplementary out-of-domain data may not matter as much as simply using a large amount. One notable exception is the parallel Wikipedia corpus (Wo\u0142k and Marasek, 2014) , which exhibits a large negative trend on recall and F1, possibly due to its noisy, automatically-aligned provenance.", "cite_spans": [ { "start": 591, "end": 611, "text": "Baayen, 2008, Ch. 7)", "ref_id": null }, { "start": 2039, "end": 2063, "text": "(Wo\u0142k and Marasek, 2014)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 927, "end": 934, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "External Training Corpora", "sec_num": "4.1" }, { "text": "The volume of parallel training data may also impact performance. For example, for the Korean track jbrem reported internal results using datasets similar to sweagraw's, achieving the same score. Further experiments extending the training set then yielded improvements of about +.1 F1. 
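To make this regression methodology concrete, here is a minimal sketch using statsmodels on synthetic data (an assumption: the paper does not name its analysis software, and the single team-level grouping factor shown here simplifies the full prompt/track/team random-effects structure):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per (team, prompt): binary indicators for corpus usage, plus the
# per-prompt weighted F1 of that team's submission (synthetic values here).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "tatoeba": rng.integers(0, 2, n),
    "wiki": rng.integers(0, 2, n),
    "team": rng.choice(["t1", "t2", "t3", "t4"], n),
})
df["f1"] = 0.4 + 0.2 * df["tatoeba"] - 0.1 * df["wiki"] + rng.normal(0, 0.05, n)

# Fixed effects for corpus choices, random intercepts grouped by team.
model = smf.mixedlm("f1 ~ tatoeba + wiki", df, groups=df["team"])
print(model.fit().summary())
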
That said, simply using larger corpora in pre-training does not guarantee higher scores: nagoudi apparently trained on all of OPUS, yet had the lowest Portuguese scores among participants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "External Training Corpora", "sec_num": "4.1" }, { "text": "Decisions made on model architecture and training procedures seemed to have more impact on final system performance. We mapped many of these design decisions into high-level system features, summarized at the top of Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 223, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Model Architecture & Training", "sec_num": "4.2" }, { "text": "Transformer vs. CNN. The baseline_fairseq we provided is based on a convolutional neural network (CNN) architecture, and a few teams also went this route. However, top-ranking teams largely opted for a Transformer-based architecture (Vaswani et al., 2017) instead. jspak3 notably used the BART architecture (Lewis et al., 2019) to pre-train a decoder in particular, and dcu also compared a phrase-based statistical MT approach (Koehn et al., 2007) to a Transformer-based neural MT system, with the latter performing better.", "cite_spans": [ { "start": 233, "end": 255, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF26" }, { "start": 307, "end": 327, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF11" }, { "start": 427, "end": 447, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture & Training", "sec_num": "4.2" }, { "text": "LRF Weights. When training on STAPLE task data, teams had to decide how to convert the one-to-many relationship of prompts and accepted translations into standard bitext for more conventional MT training. Some teams simply repeated the English prompt for each target translation (as we did for baseline_fairseq), while others used only the highest-weighted translation. Some of the more successful teams took advantage of the weights associated with each accepted translation. In particular, jbrem included multiple copies of the highest-weighted translation, nickeilf used only the top k, and sweagraw and rakchada both sampled each translation in proportion to its weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture & Training", "sec_num": "4.2" }, { "text": "Pre-train\u2192Fine-tune vs. Train Combined. Top-performing teams also tended to pre-train a generic MT model (e.g., trained on corpora from \u00a74.1) and fine-tune it using STAPLE task data. This is opposed to pooling all data together for joint training. The latter approach certainly outperformed STAPLE-only baselines, but lagged behind fine-tuned pipeline approaches in most cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture & Training", "sec_num": "4.2" }, { "text": "To measure the impact of these choices, we conducted a second mixed-effects regression analysis, coding each team with the model architecture and training decisions that describe their final submissions. Results are presented in Table 6 . Here we see empirical confirmation that Transformer-based systems tended to perform +.1 points better for all three metrics, although the effect is only marginally statistically significant (perhaps because it was also the most common choice). 
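To illustrate the weight-proportional sampling strategy described under LRF Weights above, here is a minimal sketch with hypothetical data (the exact sampling schemes of sweagraw and rakchada differed in their details):

import random

def weighted_bitext(prompt, weights, n_pairs=10, seed=7):
    # Sample target translations in proportion to their LRF weights, so that
    # translations learners actually produce dominate the fine-tuning bitext.
    rng = random.Random(seed)
    targets = rng.choices(list(weights), weights=list(weights.values()), k=n_pairs)
    return [(prompt, t) for t in targets]

weights = {"minha explicacao esta clara?": 0.6,
           "minha explicacao e clara?": 0.3,
           "esta clara minha explicacao?": 0.1}
for src, tgt in weighted_bitext("is my explanation clear?", weights, n_pairs=5):
    print(src, "->", tgt)
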
Incorporating LRF weights in the fine-tuning strategy also appears to have a robust positive effect (p < .05 across all metrics). The importance of the weighting strategy can be further illustrated by comparing jbrem with jindra.helcl. Both systems submitted to all five tracks, and otherwise used similar approaches. However, jbrem reports on an ablation experiment using only the top-weighted translation, the results of which are similar to those of jindra.helcl, who used this very strategy.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 236, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Model Architecture & Training", "sec_num": "4.2" }, { "text": "Finally, there is also a positive trend favoring pre-training on external corpora before fine-tuning, as opposed to training on all data combined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture & Training", "sec_num": "4.2" }, { "text": "Since the STAPLE task requires multiple translations for each input prompt, all teams generated n-best lists, and employed various strategies for pruning them to contain only desirable translations. The feature group at the bottom of Table 5 represents these decoding and filtering steps.", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 241, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Decoding & Filtering", "sec_num": "4.3" }, { "text": "Diverse Beam Search. Multiple teams attempted to use diverse beam search (Vijayakumar et al., 2016) to generate a more varied set of translation candidates. However, it proved either to be only marginally helpful (nickeilf, mzy) or unhelpful (jspak3) in various ablation experiments.", "cite_spans": [ { "start": 73, "end": 99, "text": "(Vijayakumar et al., 2016)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Decoding & Filtering", "sec_num": "4.3" }, { "text": "Beam Reranking. Two teams tried training an auxiliary model to rank output candidates by predicted learner response frequencies. In both cases, this approach performed poorly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding & Filtering", "sec_num": "4.3" }, { "text": "Beam Filtering. Several teams trained filtering models, which were applied to candidate translations to decide whether they should be removed from the final predictions. Approaches to this varied significantly, from language-model probabilities (jbrem) to binary classifiers including gradient-boosted decision trees (rakchada), feedforward neural networks (sweagraw), and multilingual transformers (jindra.helcl). nickeilf showed improvements using consensus voting among an ensemble of MT models, in which only sentences attested by multiple subsystems are retained. Most of these teams reported significant gains from filtering in ablation studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding & Filtering", "sec_num": "4.3" }, { "text": "Paraphrasing. Three teams implemented monolingual paraphrasing models to increase the size of their n-best list of candidates. jindra.helcl reported experiments with a Levenshtein Transformer (Gu et al., 2019) , a model that learns to create new paraphrases by editing candidate sentences. 
However, this produced output too noisy to be useful, and was omitted from their final submission.", "cite_spans": [ { "start": 192, "end": 209, "text": "(Gu et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Decoding & Filtering", "sec_num": "4.3" }, { "text": "Ensembling. A number of teams employed an ensemble of MT models, by combining either different training checkpoints, random initialization seeds, or other training regimes (such as training on reversed sequences, which was the main strategy used by masahiro, who tied for second in the Japanese track). Three teams also tried back-translation (Sennrich et al., 2016) , with mixed results.", "cite_spans": [ { "start": 342, "end": 365, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Decoding & Filtering", "sec_num": "4.3" }, { "text": "We conducted a mixed-effects analysis of decoding and filtering techniques; however, the effect sizes and p-values were much less significant than those from \u00a74.1 and \u00a74.2. These inconclusive results suggest that decoding and filtering play a smaller role in overall system performance than pre-training and model architecture decisions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding & Filtering", "sec_num": "4.3" }, { "text": "The learner response frequency weights tend to have a tall head: a few common responses carry most of the weight, and many more responses carry much less weight (e.g., many human-curated accepted translations were never attested by learners during our data collection window). Since this distribution determines weighted recall, and therefore our overall evaluation metric, it is instructive to compare against a benchmark \"oracle\" that is able to return the top-k gold translations. Table 7 shows results of such an oracle for several values of k evaluated over the test set. Table 7 : Weighted F1 scores on the test set for an \"oracle\" that outputs the top k translations from gold data. All translations (k = *) gives a perfect score of 1.0. For comparison, we include teams who submitted to all tracks, and one baseline. Underscores show the smallest value of k to outperform jbrem (the top system).", "cite_spans": [], "ref_spans": [ { "start": 487, "end": 494, "text": "Table 7", "ref_id": null }, { "start": 580, "end": 587, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Scoring the Top-k Test Translations", "sec_num": "4.4" }, { "text": "At k = 1, macro weighted F1 is still relatively low, showing that systems need to return more than a single translation to do well. Comparing k = 1 to baseline_aws (both output a single translation) shows that this high-quality baseline still does not generally produce the translation favored by Duolingo learners. It is also worth noting that top-ranking systems output the k = 1 translation more often than that of baseline_aws (83% vs. 69%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring the Top-k Test Translations", "sec_num": "4.4" }, { "text": "The top-ranking teams performed on par with or better than the k = 5 oracle, and much better for languages with a higher translation-to-prompt ratio (see Table 2 ). 
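The oracle itself is straightforward to sketch, reusing the weighted_f1 function from the sketch in \u00a72.3 (a minimal illustration, with k = None standing in for k = *):

def oracle_predictions(weights, k=None):
    # Return the k accepted translations with the highest LRF weight; k=None
    # returns the full accepted set, which scores a perfect 1.0.
    ranked = sorted(weights, key=weights.get, reverse=True)
    return set(ranked if k is None else ranked[:k])

# Macro weighted F1 for the oracle is the mean over test prompts of
# weighted_f1(oracle_predictions(w, k), w); its precision is 1.0 by
# construction, since every prediction is an accepted translation.
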
This comparison suggests that high-performing models for this task are consistently producing output comparable to the five most commonly-attested translations, and often beyond (at some expense to precision, for which the oracle is perfect).", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 161, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Scoring the Top-k Test Translations", "sec_num": "4.4" }, { "text": "So far we have discussed only quantitative outcomes for the STAPLE task. Here we present a qualitative analysis by inspecting the most common recall errors and precision errors among participating systems. These help us get a sense of how important typical errors are for our educational use case, and shed light on what performance gaps need to be closed in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "Alternative word order or synonym variations were a challenge for all teams in all tracks. For example, here are the top four accepted translations for a prompt in the Portuguese test dataset:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "1. please don't smoke por favor, n\u00e3o fume (w_1 = .663) n\u00e3o fume, por favor (w_2 = .030) por gentileza, n\u00e3o solte fuma\u00e7a (w_3 = .011) n\u00e3o fume, se faz favor (w_4 = .011)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "Most teams produced the top-weighted translation, several more identified other variants of please, but few systems generated reorderings that place it after the main clause (which, for this instance, accounts for \u2248 .184 of the total LRF weight). This can be partially explained by the use of fixed beam sizes. Since the number of translations grows exponentially with the number of lexical and structural variations, many correct combinations that the system could be capable of generating may still fall off the beam. One possible solution here would be to explore lattice-based decoding strategies that may avoid such bottlenecks. Korean, Japanese, and Vietnamese have diverse sets of pronouns for use with different registers and relationships to the subject and the listener, as seen in this example from Japanese:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "2. i exercise \u79c1\u306f\u904b\u52d5\u3059\u308b (top translation) \u50d5\u306f\u904b\u52d5\u3059\u308b (not in accepted translations)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "Here \u79c1 (watashi) is the most common first-person pronoun, but about half the submissions instead produced \u50d5 (boku), which carries more youthful or masculine connotations. While the latter is arguably correct, learners (especially beginners) are unlikely to use it, and it was also missing from the human-curated set of translations. Pronouns were difficult in general, for multiple language tracks. All five languages allow some level of pronoun-dropping, as per these examples from Hungarian and Portuguese: 3. we run to the garden", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "[mi] futunk a kertbe 4. 
would you like to try on those shoes?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "[voc\u00ea] gostaria de provar esses sapatos?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "This resulted in both over- and under-use of pronouns, both in system outputs and occasionally gold data. While both variations (with or without the pronoun) may be correct, the rules governing which is more fluent or more appropriate for instruction are subtle, and remain challenging. Systems often produced verb suffixes that convey discourse nuances or speaker attitudes not necessarily present in the English prompt or its accepted translations, as per these Korean and Japanese translations generated by multiple teams:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "5. the woman is pretty 그 여자는 예쁘네요 (\"wow, that woman is pretty\") 6. you are not a victim \u3042\u306a\u305f\u306f\u88ab\u5bb3\u8005\u3067\u306f\u306a\u3044\u3088 (\"you are not a victim, you know\")", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "One likely explanation for this is the pervasive use of OpenSubtitles data in pre-training; such suffixes are especially common in on-screen dialogue. Mistranslation of numbers was a common problem for multiple languages, which is unacceptable for education, or indeed most applications: 7. i have eighteen horses tizenh\u00e1rom lovam van (\"i have thirteen horses\") 8. she has sixteen cats \u5f7c\u5973\u306f\u732b\u3092\u516d\u5339\u98fc\u3063\u3066\u3044\u307e\u3059 (\"she has six cats\")", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "Correct noun declension was also a struggle for all systems, particularly the allative case in Hungarian (-hoz/-hez/-h\u00f6z); the following example was not produced by any system: 9. we run to the garden elrohanunk a kerthez Similarly, noun cases and postpositions in Korean led some systems to alter the sentence meaning:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "10. who do you love? 누가 너를 사랑하니 (\"who loves you?\") For Japanese, many systems frequently used English loanwords in their translations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "11. she makes me happy \u5f7c\u5973\u306f\u79c1\u3092\u30cf\u30c3\u30d4\u30fc\u306b\u3057\u3066\u304f\u308c\u308b (uses phonetic English loan for happy)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "These were generally missing from the gold data. 
Such loanwords are not especially rare, although one could also argue that using them is \"cheating\" in a language-learning context!", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.5" }, { "text": "The STAPLE task is similar to machine translation in that one takes input from one language, and produces output in another language. In fact, nearly all of the models used by participating teams were built using standard, off-the-shelf, modern machine translation software. But machine translation systems typically produce only a single output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Ultimately our goal for Duolingo, a robust system for automatically grading learner translation submissions, is closer to the world of machine translation evaluation. Motivated by shortcomings of the BLEU metric (Papineni et al., 2002) , some researchers have proposed alternative measures of evaluating MT systems against many references (Qin and Specia, 2015) , or even exhaustive translation sets collected by human translators, as with HyTER (Dreyer and Marcu, 2012) .", "cite_spans": [ { "start": 210, "end": 233, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF20" }, { "start": 337, "end": 359, "text": "(Qin and Specia, 2015)", "ref_id": "BIBREF23" }, { "start": 444, "end": 468, "text": "(Dreyer and Marcu, 2012)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We even considered using these alternatives as official metrics for the STAPLE task. The main challenges are the difficulty of gathering all possible translations (the authors of HyTER estimate that creating all translation variants for a single sentence can take two hours or more) and the assumption that all translations are equally important. To ease the burden of manually collecting references, there have been proposals for automatically generating them (Apidianaki et al., 2018) using paraphrase databases such as PPDB (Pavlick et al., 2015) .", "cite_spans": [ { "start": 463, "end": 488, "text": "(Apidianaki et al., 2018)", "ref_id": "BIBREF1" }, { "start": 529, "end": 551, "text": "(Pavlick et al., 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "This brings us to other areas of research that are closely related to our task: automatic paraphrasing (Wieting et al., 2015; Witteveen and Andrews, 2019) , as well as research in diverse beam search methods (Vijayakumar et al., 2016; Li et al., 2016) for decoding multiple natural language outputs. We are happy that this shared task can serve as a forum for studying the intersection of these problems, and it is our hope that the STAPLE task data will continue to foster research in all of these areas.", "cite_spans": [ { "start": 100, "end": 122, "text": "(Wieting et al., 2015;", "ref_id": "BIBREF28" }, { "start": 123, "end": 151, "text": "Witteveen and Andrews, 2019)", "ref_id": "BIBREF29" }, { "start": 205, "end": 231, "text": "(Vijayakumar et al., 2016;", "ref_id": "BIBREF27" }, { "start": 232, "end": 248, "text": "Li et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We have presented the STAPLE task, described a new and unique corpus for studying it, and reported on the results of a shared task challenge designed to explore this new domain. 
The task successfully drew participation from dozens of research teams from all over the world, synthesizing work in machine translation, MT evaluation, and automatic paraphrasing, among other areas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "We learned that a pipeline of strong machine translation followed by fine-tuning on learner-weighted STAPLE data produces strong results. While the data for this task are geared toward language learners (and are therefore simpler than more commonly-studied domains such as newswire), it is our hope that the STAPLE task provides a blueprint for ongoing interdisciplinary work in this vein. All task data, including dev and test labels, will remain available at: https://doi.org/10.7910/DVN/38OJR6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "https://www.duolingo.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://sharedtask.duolingo.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Thus, a team participating in all five tracks would yield 5 \u00d7 500 = 2,500 data points for this regression analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank Colin Cherry for seeding the idea that ultimately became the STAPLE task. Thanks also to the organizers of the Workshop on Neural Generation and Translation (WNGT) for providing a forum for this work, as well as all the participating teams. Special thanks to Nathan Dalal and Andrew Runge for help reviewing and summarizing the system papers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Generating diverse translations via weighted fine-tuning and hypotheses filtering for the Duolingo STAPLE task", "authors": [ { "first": "Sweta", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sweta Agrawal and Marine Carpuat. 2020. Generating diverse translations via weighted fine-tuning and hypotheses filtering for the Duolingo STAPLE task. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automated paraphrase lattice creation for HyTER machine translation evaluation", "authors": [ { "first": "Marianna", "middle": [], "last": "Apidianaki", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wisniewski", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Cocos", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "480--485", "other_ids": { "DOI": [ "10.18653/v1/N18-2077" ] }, "num": null, "urls": [], "raw_text": "Marianna Apidianaki, Guillaume Wisniewski, Anne Cocos, and Chris Callison-Burch. 2018. 
Automated paraphrase lattice creation for HyTER machine translation evaluation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 480-485, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Analyzing Linguistic Data: A Practical Introduction to Statistics using R", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.H. Baayen. 2008. Analyzing Linguistic Data: A Practical Introduction to Statistics using R. Cambridge University Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generating sentences from a continuous space", "authors": [ { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Dai", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.06349" ] }, "num": null, "urls": [], "raw_text": "Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Simultaneous paraphrasing and translation by fine-tuning transformer models", "authors": [ { "first": "Rakesh", "middle": [], "last": "Chada", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rakesh Chada. 2020. Simultaneous paraphrasing and translation by fine-tuning transformer models. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "HyTER: Meaning-equivalent semantics for translation evaluation", "authors": [ { "first": "Markus", "middle": [], "last": "Dreyer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "162--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markus Dreyer and Daniel Marcu. 2012. HyTER: Meaning-equivalent semantics for translation evaluation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 162-171, Montr\u00e9al, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Levenshtein transformer", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Changhan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "11179--11189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems, pages 11179-11189.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The ADAPT system description for the STAPLE 2020 English-to-Portuguese translation task", "authors": [ { "first": "Rejwanul", "middle": [], "last": "Haque", "suffix": "" }, { "first": "Yasmin", "middle": [], "last": "Moslem", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rejwanul Haque, Yasmin Moslem, and Andy Way. 2020. The ADAPT system description for the STAPLE 2020 English-to-Portuguese translation task. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "English-to-Japanese diverse translation by combining forward and backward outputs", "authors": [ { "first": "Masahiro", "middle": [], "last": "Kaneko", "suffix": "" }, { "first": "Aizhan", "middle": [], "last": "Imankulova", "suffix": "" }, { "first": "Tosho", "middle": [], "last": "Hirasawa", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masahiro Kaneko, Aizhan Imankulova, Tosho Hirasawa, and Mamoru Komachi. 2020. English-to-Japanese diverse translation by combining forward and backward outputs. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The JHU submission to the 2020 Duolingo shared task on simultaneous translation and paraphrase for language education", "authors": [ { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Bremerman", "suffix": "" }, { "first": "Arya", "middle": [ "D" ], "last": "McCarthy", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Murray", "suffix": "" }, { "first": "Winston", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huda Khayrallah, Jacob Bremerman, Arya D. McCarthy, Kenton Murray, Winston Wu, and Matt Post. 2020. The JHU submission to the 2020 Duolingo shared task on simultaneous translation and paraphrase for language education. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). 
ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ves", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.13461" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A simple, fast diverse decoding algorithm for neural generation", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.08562" ] }, "num": null, "urls": [], "raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. 
A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Exploring model consensus to generate translation paraphrases", "authors": [ { "first": "Zhenhao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenhao Li, Marina Fomicheva, and Lucia Specia. 2020. Exploring model consensus to generate translation paraphrases. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Expand and filter: CUNI and LMU systems for the WNGT 2020 Duolingo shared task", "authors": [ { "first": "Jind\u0159ich", "middle": [], "last": "Libovick\u00fd", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "Kasner", "suffix": "" }, { "first": "Jind\u0159ich", "middle": [], "last": "Helcl", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jind\u0159ich Libovick\u00fd, Zden\u011bk Kasner, Jind\u0159ich Helcl, and Ond\u0159ej Du\u0161ek. 2020. Expand and filter: CUNI and LMU systems for the WNGT 2020 Duolingo shared task. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Growing together: Modeling human language learning with n-best multi-checkpoint machine translation", "authors": [ { "first": "El Moatez Billah", "middle": [], "last": "Nagoudi", "suffix": "" }, { "first": "Muhammad", "middle": [], "last": "Abdul-Mageed", "suffix": "" }, { "first": "Hasan", "middle": [], "last": "Cavusoglu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, and Hasan Cavusoglu. 2020. Growing together: Modeling human language learning with n-best multi-checkpoint machine translation. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Meeting the 2020 Duolingo challenge on a shoestring", "authors": [ { "first": "Tadashi", "middle": [], "last": "Nomoto", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tadashi Nomoto. 2020. Meeting the 2020 Duolingo challenge on a shoestring. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.01038" ] }, "num": null, "urls": [], "raw_text": "Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. 
arXiv preprint arXiv:1904.01038.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "User's guide to sigf: Significance testing by approximate randomisation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Pad\u00f3. 2006. User's guide to sigf: Significance testing by approximate randomisation.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "POSTECH submission on Duolingo shared task", "authors": [ { "first": "Junsu", "middle": [], "last": "Park", "suffix": "" }, { "first": "Hongseok", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Jong-Hyeok", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junsu Park, Hongseok Kwon, and Jong-Hyeok Lee. 2020. POSTECH submission on Duolingo shared task. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "425--430", "other_ids": { "DOI": [ "10.3115/v1/P15-2070" ] }, "num": null, "urls": [], "raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 425-430, Beijing, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Truly exploring multiple references for machine translation evaluation", "authors": [ { "first": "Ying", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 18th Annual Conference of the European Association for Machine Translation", "volume": "", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Qin and Lucia Specia. 2015. Truly exploring multiple references for machine translation evaluation. In Proceedings of the 18th Annual Conference of the European Association for Machine Translation, pages 113-120, Antalya, Turkey.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": { "DOI": [ "10.18653/v1/P16-1009" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Parallel data, tools and interfaces in OPUS", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey. European Language Resources Association (ELRA).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Diverse beam search: Decoding diverse solutions from neural sequence models", "authors": [ { "first": "Ashwin", "middle": [ "K" ], "last": "Vijayakumar", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cogswell", "suffix": "" }, { "first": "Ramprasath", "middle": [ "R" ], "last": "Selvaraju", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "David", "middle": [], "last": "Crandall", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1610.02424" ] }, "num": null, "urls": [], "raw_text": "Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Towards universal paraphrastic sentence embeddings", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Towards universal paraphrastic sentence embeddings. CoRR, abs/1511.08198.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Paraphrasing with large language models", "authors": [ { "first": "Sam", "middle": [], "last": "Witteveen", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Andrews", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "215--220", "other_ids": { "DOI": [ "10.18653/v1/D19-5623" ] }, "num": null, "urls": [], "raw_text": "Sam Witteveen and Martin Andrews. 2019. Paraphrasing with large language models. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 215-220, Hong Kong. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Building subject-aligned comparable corpora and mining it for truly parallel sentence pairs", "authors": [ { "first": "Krzysztof", "middle": [], "last": "Wo\u0142k", "suffix": "" }, { "first": "Krzysztof", "middle": [], "last": "Marasek", "suffix": "" } ], "year": 2014, "venue": "Procedia Technology", "volume": "18", "issue": "", "pages": "126--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krzysztof Wo\u0142k and Krzysztof Marasek. 2014. Building subject-aligned comparable corpora and mining it for truly parallel sentence pairs. 
Procedia Technology, 18:126-132.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Multi-step fine-tuning and encouraging diversity of high-coverage neural machine translation", "authors": [ { "first": "Michael", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yixin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Mayuranath", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Yang, Yixin Liu, and Rahul Mayuranath. 2020. Multi-step fine-tuning and encouraging diversity of high-coverage neural machine translation. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Screenshots from the Duolingo app (iOS, circa 2020), showing translation exercises for English prompts into Portuguese. The first two examples show correct student translations, with Duolingo suggesting an alternate, preferred translation in the second case. The third and fourth responses show incorrect translations.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "Generalized pipeline used by most systems.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Precision and weighted recall for each system and language. The dashed line represents equal precision and weighted recall. Curved lines represent weighted F1 in increments of 0.1.", "type_str": "figure", "uris": null, "num": null }, "TABREF4": { "text": "Mixed-effects analysis of the most commonly cited external corpora used for training.", "type_str": "table", "html": null, "content": "