{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:06:38.220094Z" }, "title": "Exploring Model Consensus to Generate Translation Paraphrases", "authors": [ { "first": "Zhenhao", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Imperial College London", "location": {} }, "email": "zhenhao.li18@imperial.ac.uk" }, { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sheffield", "location": {} }, "email": "m.fomicheva@sheffield.ac.uk" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "", "affiliation": {}, "email": "l.specia@imperial.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). This task focuses on improving the ability of neural MT systems to generate diverse translations. Our submission explores various methods, including Nbest translation, Monte Carlo dropout, Diverse Beam Search, Mixture of Experts, Ensembling, and Lexical Substitution. Our main submission is based on the integration of multiple translations from multiple methods using Consensus Voting. Experiments show that the proposed approach achieves a considerable degree of diversity without introducing noisy translations. Our final submission 1 achieves 0.5510 weighted F1 score on the blind test set for the English-Portuguese track.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper describes our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). This task focuses on improving the ability of neural MT systems to generate diverse translations. Our submission explores various methods, including Nbest translation, Monte Carlo dropout, Diverse Beam Search, Mixture of Experts, Ensembling, and Lexical Substitution. Our main submission is based on the integration of multiple translations from multiple methods using Consensus Voting. Experiments show that the proposed approach achieves a considerable degree of diversity without introducing noisy translations. Our final submission 1 achieves 0.5510 weighted F1 score on the blind test set for the English-Portuguese track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine Translation (MT) systems are typically used to produce a single output for a given source sentence, whereas in human translation the same source sentence can often be translated in various different ways while still preserving its meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE) (Mayhew et al., 2020) , participating MT systems are evaluated using multiple reference translations to measure their ability to generate diverse, yet high quality translations. For that, a new dataset with multiple human translations for each source sentence is provided. These human translations were produced by language learners as part of a translation exercise on the Duolingo platform 2 where they were asked to translate sentences from the language they were learning (e.g. English) to their native language. 
Each translation in the dataset is assigned a weight based on the learner response frequency. Table 1 gives an example of the weighted translations in the dataset for English-Portuguese. The STAPLE dataset includes five language pairs: English to Portuguese, Hungarian, Japanese, Korean, and Vietnamese. In the shared task, we only participated in English-Portuguese (En-Pt) track. In this paper, we experiment with various methods to improve the diversity of translations, while preserving their quality. We show that simply by generating N-best translations with larger beam size, we can already achieve a considerable degree of diversity. Our final submission is based on the integration of multiple translations from various methods, namely N-best translation, Monte Carlo dropout, Mixture of Experts, Ensembling, and Lexical Substitution, through a consensus voting mechanism. It achieves 0.5510 weighted F1 score on the official blind test set.", "cite_spans": [ { "start": 108, "end": 129, "text": "(Mayhew et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 719, "end": 726, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is structured as follows: Section 2 describes the methods we used in our experiments. Section 3 introduces the experimental settings, including data preparation, model hyperparameters, and the evaluation procedure. Section 4 describes the results and analysis. Section 5 presents our three official submissions to STAPLE blind test set. Finally, Section 6 summarises our submission to the shared task and our contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In what follows we describe the methods used in our experiments, including N-best translation, Monte Carlo dropout, Diverse Beam Search, Mixture of Experts, Ensembling and Lexical Substitution. We combine all of these methods except the Diverse Beam Search in our official submissions through a consensus voting mechanism. Details about the submissions can be found in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "2" }, { "text": "The simplest method to generate multiple translations for a given sentence is to use N-best translations with a large beam size during decoding. Larger beam size might lead to more translation options with similar meanings. We experimented with multiple sizes for N , and used the same value for N-best and beam size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-best", "sec_num": "2.1" }, { "text": "Gal and Ghahramani (2016) proposed the Monte Carlo (MC) dropout method to estimate predictive NMT model uncertainty. The method consists in running several forward passes through the model (i.e., at inference time), each applying dropout before every weight layer and collecting posterior probabilities generated by the model with parameters perturbed by dropout. The mean and variance of the resulting distribution can then be used to represent model uncertainty. Instead of using this method for scoring translations, we use it as a way to generate alternative MT hypotheses for a given source sentence. Specifically, we run inference with dropout M times and collect the resulting translations. In our experiments, the dropout rate is set to 0.1 and M = 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MC Dropout", "sec_num": "2.2" }, { "text": "Vijayakumar et al. 
(2016) proposed the Diverse Beam Search algorithm to improve the diversity of beam hypotheses. The algorithm proceeds by dividing the beam budget into groups and enforcing diversity between groups of beams. In our experiments we use the implementation of this algorithm in fairseq with default parameters. Shen et al. (2019) introduced the Mixture of Experts (MoE) framework to capture the inherent uncertainty of the MT task where the same input sentence can have multiple correct translations. A mixture model introduces a multinomial latent variable to control generation and produce a diverse set of MT hypotheses. In our experiments we use a hard mixture model with a uniform prior and 5 mixture components.", "cite_spans": [ { "start": 322, "end": 340, "text": "Shen et al. (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Diverse Beam Search", "sec_num": "2.3" }, { "text": "Training an ensemble of various MT models initialized with different random seeds is a common strategy used to boost the output quality (Garmash and Monz, 2016) . Unlike the typical ensembling method that combines prediction distributions from different models by averaging, we use each system in the ensemble to generate a separate set of translation hypotheses, and take the set of distinct translations as the final output.", "cite_spans": [ { "start": 136, "end": 160, "text": "(Garmash and Monz, 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Ensembling", "sec_num": "2.5" }, { "text": "In the STAPLE dataset, we observed that many of the paraphrases in translations are simple variants with word substitutions in the target language. Therefore, we built a dictionary containing all lexical substitutions from the STAPLE training data. The substitutions are sorted according to two criteria: 1) number of occurrences and 2) substitution probability. The substitution probability is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical substitution", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(sub) = \\frac{Count(sub(w_1, w_2))}{Count(w_1)}", "eq_num": "(1)" } ], "section": "Lexical substitution", "sec_num": "2.6" }, { "text": "The top-5 lexical substitutions from frequency-sorted and probability-sorted dictionaries are listed in Table 2 . We filtered the substitution dictionary with a stopword list 3 and a threshold (which can be either a frequency count or a substitution probability), to avoid generating ungrammatical translations.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 110, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Lexical substitution", "sec_num": "2.6" }, { "text": "Frequency-sorted: neste->nesse (5091), ir\u00e1->vai (4920), vou->irei (4645), local->lugar (2989), bem->bastante (2694). Probability-sorted: baixar->descarregar (1.0), descarregar->baixar (1.0), situa-se->fica (1.0), achasse->encontrasse (1.0), localizasse->achasse (1.0). Table 2 : Top-5 lexical substitutions in frequency-sorted and probability-sorted dictionaries.", "cite_spans": [], "ref_spans": [ { "start": 245, "end": 252, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Frequency", "sec_num": null }, { "text": "To integrate translations from different models, we employed a consensus voting mechanism by counting the number of systems that predicted each translation.
A threshold T_con is set, meaning that a translation must be predicted by at least T_con + 1 systems, otherwise it is removed. Considering that lexical substitution might generate rare but correct translations, we assign the lexically substituted translations a weight W_sub so that they count as if generated by W_sub systems. The consensus method guarantees high precision by removing translations that are likely to be incorrect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Consensus voting", "sec_num": "2.7" }, { "text": "To build the NMT model, we used parallel corpora for En-Pt from OPUS (Tiedemann, 2012) as out-of-domain data, including ParaCrawl 4 , EUbookshop 5 , Europarl 6 , Wikipedia 7 , QED 8 , and Tatoeba 9 . The combination of these corpora contains 22.42 million parallel sentence pairs. The STAPLE dataset, which contains 4000 source sentences with 526,466 translations, is used as in-domain data for finetuning.", "cite_spans": [ { "start": 69, "end": 86, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments 3.1 Data", "sec_num": "3" }, { "text": "Since in the STAPLE dataset a source sentence has an average of 131 reference translations, we constructed parallel data by duplicating the source sentence to match the number of translations, as shown in Figure 1 . We also experimented with different data filtering strategies on the STAPLE dataset by keeping only the top-K translations with the highest weights (we refer to this as tune-K). Statistics regarding the corpus size after filtering are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 213, "end": 221, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 468, "end": 475, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments 3.1 Data", "sec_num": "3" }, { "text": "tune-5: 20,000 translations (5.00 per source); tune-10: 40,000 (10.00); tune-20: 78,439 (19.61); tune-all: 526,466 (131.62).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Source Translations", "sec_num": null }, { "text": "All sentences are tokenized with Moses (Koehn et al., 2007) , and then processed via Byte-Pair-Encoding (BPE) (Sennrich et al., 2016) . A shared vocabulary of 40,000 subwords is constructed for both English and Portuguese. The training data was then cleaned by removing sentence pairs with more than 250 subwords or with a length ratio over 1.5, using the clean-corpus-n.perl 10 script in Moses.", "cite_spans": [ { "start": 46, "end": 66, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF2" }, { "start": 117, "end": 140, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Filtering Source Translations", "sec_num": null }, { "text": "We used the Transformer model (Vaswani et al., 2017) as our baseline model. The model is trained using the fairseq toolkit (Ott et al., 2019) with the default hyperparameter settings for the transformer_wmt_en_de architecture. The model was trained on 8 GPUs with a batch size of 4096 tokens on each GPU. We used mixed-precision training to accelerate training. The model was pre-trained on OPUS data for 30 epochs and then fine-tuned on STAPLE data. We set 5 as the number of experts for training the MoE system.
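To make the tune-K data construction from Section 3.1 concrete, here is a minimal sketch assuming a simple in-memory representation of the STAPLE entries; the function name, field layout, and example values are illustrative assumptions, not the authors' actual preprocessing code.

```python
# Illustrative sketch of the tune-K construction from Section 3.1 (cf. Figure 1):
# keep only the top-K references by learner-response weight and duplicate the
# source sentence once per kept reference. The input structure is assumed.

def build_tune_k_pairs(staple_entries, k=None):
    """staple_entries: list of (source, [(translation, weight), ...]) tuples."""
    pairs = []
    for source, refs in staple_entries:
        # Sort references by weight, highest first, and keep the top K (or all).
        refs = sorted(refs, key=lambda r: r[1], reverse=True)
        kept = refs if k is None else refs[:k]
        for translation, _weight in kept:
            pairs.append((source, translation))
    return pairs

# Example: tune-10 keeps at most 10 references per source sentence.
entries = [("I am a boy.", [("Eu sou um menino.", 0.42), ("Sou um menino.", 0.31)])]
print(build_tune_k_pairs(entries, k=10))
```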
For ensembling, we pretrained models with 3 random seeds and fine-tuned each with 4 random seeds, resulting in 12 different MT systems.", "cite_spans": [ { "start": 30, "end": 52, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Model and hyperparameters", "sec_num": "3.2" }, { "text": "When generating an integration of translations from multiple systems, we follow the procedure described below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generation of Translations", "sec_num": "3.3" }, { "text": "The shared task provides a blind dev set (blind-dev) and a blind test set (blind-test) for evaluation. Since the number of submissions is limited, we also take a small random split from the STAPLE training set for dev (heldout-dev) and test (heldout-test) sets with 500 source sentences. The translations are evaluated at the sentence level as a classification problem where true positives (TP) occur when the system produces one of the translations in the given set of references, false positives (FP) when a translation out of this set is produced, and false negatives (FN) when translations in this set are missed by the system. The official evaluation metric is a weighted macro F1-score averaging over all source sentences. The weighted F1 score is calculated with weighted recall and unweighted precision:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.4" }, { "text": "recall = \\sum_{t \\in TP} weight(t), \\quad precision = \\frac{TP}{TP + FP}, \\quad weighted\\ F_1 = \\frac{2 \\cdot precision \\cdot recall}{precision + recall}, \\quad weighted\\ macro\\ F_1 = \\frac{\\sum_{s \\in S} weighted\\ F_1(s)}{|S|}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.4" }, { "text": "N-best We present the F1 score with respect to n-best size (from 1 to 20) in Figure 2 . The models fine-tuned with different filtered data are evaluated on our heldout test set. As shown in Figure 2 , the pre-trained model (tune-0) shows poorer performance than the fine-tuned models.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 190, "end": 198, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The tune-1 model shows good performance when the N-best size is small, but experiences a degradation when N-best increases. Models fine-tuned with 5, 10, and 20 reference translations show similar performances with F1 scores around 0.49. However, the optimal n-best size is closely related to the number of translations used for fine-tuning, with N-best=3, 10, 12, and 18 for models tuned with 1, 5, 10, and 20 references respectively. The model fine-tuned with all translations in the STAPLE dataset shows a growing trend in F1 score as n-best size increases, but its overall F1 score is still much lower than for those three fine-tuned models. We found that the upper bound for the tune-all model is around 0.415 F1 score. Table 4 shows a comparison on the heldout-test set between the N-best and N-best with MC dropout. It can be seen that N-best12 achieves a higher recall than N-best5, which leads to an increase of 0.038 in F1 score. When decoding with dropout, N-best5 could match the performance of N-best12. Although MC dropout improves the performance for small N-best sizes, we found that the weighted F1 score does not improve further when the N-best size gets larger.
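As a reading aid for the metric defined in Section 3.4, the following is a minimal sketch of the weighted macro F1 computation; it is not the official scorer, and the data formats (sets of predictions, per-sentence weight dictionaries) are assumptions.

```python
# Sketch of the weighted macro F1 from Section 3.4: recall is the summed weight
# of the covered references, precision is the unweighted fraction of predictions
# that match a reference, and the per-sentence weighted F1 is macro-averaged.

def weighted_f1(predictions, references):
    """predictions: set of strings; references: dict translation -> weight."""
    tp = [t for t in predictions if t in references]
    fp = [t for t in predictions if t not in references]
    recall = sum(references[t] for t in tp)
    precision = len(tp) / (len(tp) + len(fp)) if predictions else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def weighted_macro_f1(all_predictions, all_references):
    """Average the per-sentence weighted F1 over all source sentences."""
    scores = [weighted_f1(p, r) for p, r in zip(all_predictions, all_references)]
    return sum(scores) / len(scores)
```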
Diverse beam search When evaluating diverse beam search on the heldout-test set, we found that its performance lags far behind the N-best baseline, with an F1 score of only 0.292. We looked into some translation examples and noticed that although diverse beam search can lead to more diversity in translations, it sometimes adds an extra full stop at the end of translations.", "cite_spans": [], "ref_spans": [ { "start": 711, "end": 718, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Since the evaluation is conducted at the sentence level, such a minor modification can lead to a large number of false positives. In the final submission, we left this method out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MC dropout", "sec_num": null }, { "text": "Mixture of experts Regarding the MoE method, we found that different experts show inconsistent performance. As shown in Table 5 , with the same N-best size, experts 2, 3, and 5 show good performance, achieving an F1 score over 0.4. However, the other two experts, especially expert 4, exhibit poorer performance. This might be caused by insufficient training for the experts that perform poorly. In the final submission, we removed translations from experts 1 and 4 to avoid incorrect predictions. Table 5 : An illustration of the inconsistent performance from different experts in MoE (with N-best=12).", "cite_spans": [], "ref_spans": [ { "start": 120, "end": 127, "text": "Table 5", "ref_id": null }, { "start": 500, "end": 507, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "MC dropout", "sec_num": null }, { "text": "Ensemble & Consensus In Table 6 , we present our ensembling submission and consensus submission (with threshold T_con set to 1) on the blind-dev set. Both ensembling and consensus voting improve over the N-best by increasing the recall and reducing the precision. However, since consensus voting removes translations that receive fewer votes from other systems, its precision is higher than that of ensembling while the recall is similar. This leads to a higher F1 score for the consensus submission.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 31, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Expert Precision", "sec_num": null }, { "text": "N-best: precision 0.714, recall 0.483, F1 0.521; +Ensemble: precision 0.617, recall 0.549, F1 0.523; +Consensus(T_con = 1): precision 0.652, recall 0.534, F1 0.530. Table 6 : A comparison between ensembling and consensus voting.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Expert Precision", "sec_num": null }, { "text": "Ensembling can be seen as a special case of consensus voting, with the threshold T_con being zero. Ensembling maximizes recall by taking translations from all the systems but sacrifices precision. Increasing the value of the threshold T_con would compensate for the precision loss while maintaining the gain in recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expert Precision", "sec_num": null }, { "text": "Lexical substitution Table 7 shows the submissions on the blind-dev set after applying lexical substitution to a consensus output combining ensembled N-best, MC dropout, and MoE systems. We first generated a set of translations with all lexical substitutions, using the translations from an N-best system.
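The lexical substitution step just described can be sketched as follows; this is an assumption-laden illustration (single replacement per word, whitespace tokenization, hypothetical variable names), not the authors' implementation.

```python
# Sketch of the lexical substitution from Section 2.6: apply dictionary entries
# that pass a probability threshold (and are not stopwords) to an existing
# hypothesis, creating one extra candidate per substituted word.

def lexical_variants(hypothesis, sub_dict, stopwords, min_prob=0.85):
    """hypothesis: a translation string; sub_dict: word -> (replacement, prob)."""
    tokens = hypothesis.split()
    variants = []
    for i, word in enumerate(tokens):
        if word in stopwords or word not in sub_dict:
            continue
        replacement, prob = sub_dict[word]
        if prob < min_prob:
            continue
        variants.append(" ".join(tokens[:i] + [replacement] + tokens[i + 1:]))
    return variants

# Example with the Table 2 entry baixar -> descarregar (probability 1.0).
subs = {"baixar": ("descarregar", 1.0)}
print(lexical_variants("eu quero baixar o livro", subs, stopwords={"eu", "o"}))
```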
The translations with lexical substitution achieve an F1 score of 0.127, which shows the potential benefit of this method. However, as shown in Table 7 , simply adding the substituted translations harms performance, for both frequency-based sorting and probability-based sorting. This is due to the fact that the translations after substitution are likely to be ungrammatical since the substituted word does not fit in the context. To alleviate this, we added the substituted translations to the consensus pool for higher precision. This only improves over the consensus system without lexical substitution by +0.002 F1 score.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 7", "ref_id": null }, { "start": 446, "end": 453, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Expert Precision", "sec_num": null }, { "text": "Lexical only: 0.127; Consensus(T_con = 5): 0.542; +lexical (freq > 1000): 0.512; +lexical (prob > 0.85): 0.532; +lexical (prob > 0.85, consensus): 0.544. Table 7 : An illustration of the benefit and harm from lexical substitution (evaluated on the blind-dev set). The Consensus system combines the ensembled N-best, MC-Dropout, and MoE systems.", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Lexical only", "sec_num": null }, { "text": "In the experiments combining these methods, we found that the N-best translations contribute the most among all these methods. While an N-best system could achieve a weighted F1 score of nearly 0.5, other methods such as MC-Dropout, Ensembling and Consensus would only result in an extra improvement of less than 0.05 in weighted F1 score. In our experiments, Diverse Beam Search and Mixture of Experts systems did not contribute much.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical only", "sec_num": null }, { "text": "Our official submissions combine translations from 12 tune-10 N-best systems (12 random seeds, finetuned with top-10 references, N = 12), 12 tune-20 N-best systems (12 random seeds, finetuned with top-20 references, N = 20), 2 MC Dropout systems (n = 3, M = 50; n = 5, M = 10 ), 3 experts from the MoE system, and lexical substitution (with a probability threshold of 0.7). The consensus voting threshold T_con is set to 10, and the weight W_sub for lexical substitution is 9. Results for our three official submissions to the blind test set are shown in Table 8 . The best submission, which achieves an F1 score of 0.5510, applies both consensus voting and lexical substitution. As shown in the second submission, removing lexical substitution would reduce the F1 score by 0.006, although the precision is improved marginally. In the third submission, we set the consensus voting threshold T_con to 1 to see the upper bound for recall. The recall increases from 0.516 to 0.580 while the precision drops significantly from 0.741 to 0.579.", "cite_spans": [], "ref_spans": [ { "start": 558, "end": 565, "text": "Table 8", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Official submissions", "sec_num": "5" }, { "text": "Our best submission achieves the second position in the English-Portuguese track, with only 0.0006 weighted F1 score behind the winning submission.
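For clarity, the consensus voting used in these submissions (Section 2.7, with T_con = 10 and W_sub = 9 in the best run) can be sketched as follows; the function and variable names are illustrative assumptions, not the actual submission pipeline.

```python
# Sketch of consensus voting (Section 2.7): each translation receives one vote
# per MT system that produced it, lexically substituted candidates receive
# W_sub votes, and only translations with at least T_con + 1 votes are kept.
from collections import Counter

def consensus_vote(system_outputs, lexical_outputs, t_con=10, w_sub=9):
    """system_outputs: list of sets of translations (one set per system);
    lexical_outputs: set of translations produced by lexical substitution."""
    votes = Counter()
    for output in system_outputs:
        votes.update(output)
    for translation in lexical_outputs:
        votes[translation] += w_sub
    # Keep a translation only if it is supported by more than t_con votes.
    return {t for t, v in votes.items() if v >= t_con + 1}
```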
The official results on the STAPLE test set are shown in Table 9 .", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 206, "text": "Table 9", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Official submissions", "sec_num": "5" }, { "text": "Weighted F1 by participant: jbrem 0.5516; Ours 0.5510; rakchada 0.5440; aws baseline 0.2130; fairseq baseline 0.1357.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participant", "sec_num": null }, { "text": "This paper describes our submissions to the STAPLE shared task for English-Portuguese translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We showed that simply generating N-best translations already achieves a considerable degree of diversity and quality. We experimented with various methods to improve the diversity of the MT output, including N-best translation, MC Dropout, Diverse Beam Search, Mixture of Experts, Ensembling, Consensus Voting, and Lexical Substitution. We showed the benefits and drawbacks of these methods in generating diverse, high-quality translations. Our systems combining these methods further improve over N-best translation and achieve a 0.5510 weighted F1 score on the STAPLE blind test set, which is only 0.0006 behind the winning submission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "A.1 Checkpointing vs tune-K Table 10 presents the best finetuning checkpoint for models finetuned with different numbers of references. Models trained with more references might converge faster, and when the tuning number is larger than 40, only 1 epoch is used for finetuning.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 35, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "A Appendices", "sec_num": null }, { "text": "Finetuning (best checkpoint): tune-1: 10; tune-5: 10; tune-10: 6; tune-20: 4; tune-40: 1; tune-all: 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Appendices", "sec_num": null }, { "text": "To provide a comprehensive understanding of the different methods, we selectively list our submissions to the blind-dev set in Table 11 .", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 135, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "A.2 Submission on blind-dev set", "sec_num": null }, { "text": "https://github.com/Nickeilf/STAPLE20 2 https://www.duolingo.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://snowball.tartarus.org/algorithms/portuguese/stop.txt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://opus.nlpl.eu/ParaCrawl-v5.php 5 http://opus.nlpl.eu/EUbookshop-v2.php 6 http://opus.nlpl.eu/Europarl-v8.php 7 http://opus.nlpl.eu/Wikipedia-v1.0.php 8 http://opus.nlpl.eu/QED-v2.0a.php 9 http://opus.nlpl.eu/Tatoeba-v20190709.
php", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ training/clean-corpus-n.perl", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "authors": [ { "first": "Yarin", "middle": [], "last": "Gal", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2016, "venue": "international conference on machine learning", "volume": "", "issue": "", "pages": "1050--1059", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian Approximation: Representing Model Un- certainty in Deep Learning. In international confer- ence on machine learning, pages 1050-1059.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Ensemble learning for multi-source neural machine translation", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Garmash", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1409--1418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Garmash and Christof Monz. 2016. Ensem- ble learning for multi-source neural machine trans- lation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguis- tics: Technical Papers, pages 1409-1418.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. 
In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180, Prague, Czech Republic. As- sociation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Simultaneous translation and paraphrase for language education", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Klinton", "middle": [], "last": "Bicknell", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brust", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mayhew, Klinton Bicknell, Chris Brust, Bill McDowell, Will Monroe, and Burr Settles. 2020. Si- multaneous translation and paraphrase for language education. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mixture models for diverse machine translation: Tricks of the trade", "authors": [ { "first": "Tianxiao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.07816" ] }, "num": null, "urls": [], "raw_text": "Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. arXiv preprint arXiv:1902.07816.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Parallel data, tools and interfaces in opus", "authors": [ { "first": "Jrg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jrg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. 
Curran Asso- ciates, Inc.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Diverse beam search: Decoding diverse solutions from neural sequence models", "authors": [ { "first": "K", "middle": [], "last": "Ashwin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Vijayakumar", "suffix": "" }, { "first": "", "middle": [], "last": "Cogswell", "suffix": "" }, { "first": "R", "middle": [], "last": "Ramprasath", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Selvaraju", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "David", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Crandall", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1610.02424" ] }, "num": null, "urls": [], "raw_text": "Ashwin K Vijayakumar, Michael Cogswell, Ram- prasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural se- quence models. arXiv preprint arXiv:1610.02424.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Constructing parallel fine-tuning data from the STAPLE dataset." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "F1 score w.r.t N-best size for models finetuned with different number of reference translations." }, "TABREF0": { "num": null, "html": null, "content": "
: A comparison between N-best and N-best with MC Dropout.