{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:06:52.321844Z" }, "title": "English-to-Japanese Diverse Translation by Combining Forward and Backward Outputs", "authors": [ { "first": "Masahiro", "middle": [], "last": "Kaneko", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Metropolitan University", "location": { "addrLine": "6-6 Asahigaoka", "postCode": "191-0065", "settlement": "Hino", "region": "Tokyo", "country": "Japan" } }, "email": "kaneko-masahiro@ed.tmu.ac.jp" }, { "first": "Aizhan", "middle": [], "last": "Imankulova", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Metropolitan University", "location": { "addrLine": "6-6 Asahigaoka", "postCode": "191-0065", "settlement": "Hino", "region": "Tokyo", "country": "Japan" } }, "email": "imankulova-aizhan@ed.tmu.ac.jp" }, { "first": "Tosho", "middle": [], "last": "Hirasawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Metropolitan University", "location": { "addrLine": "6-6 Asahigaoka", "postCode": "191-0065", "settlement": "Hino", "region": "Tokyo", "country": "Japan" } }, "email": "hirasawa-tosho@ed.tmu.ac.jp" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Metropolitan University", "location": { "addrLine": "6-6 Asahigaoka", "postCode": "191-0065", "settlement": "Hino", "region": "Tokyo", "country": "Japan" } }, "email": "komachi@tmu.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce our TMU system that is submitted to The 4th Workshop on Neural Generation and Translation (WNGT2020) to Englishto-Japanese (En\u2192Ja) track on Simultaneous Translation And Paraphrase for Language Education (STAPLE) shared task. 
In most cases, machine translation systems generate a single output for an input sentence. However, to assist language learners with better and more diverse feedback, it is helpful to create a machine translation system that can produce diverse translations of each input sentence. Creating such systems, however, would require complex modifications to a model to ensure the diversity of outputs. In this paper, we investigated whether such a system can be created in a simple way and whether it can produce the desired diverse outputs. In particular, we combined the outputs of forward and backward neural machine translation (NMT) models. Our system achieved third place in the En\u2192Ja track despite adopting only a simple approach.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We introduce our TMU system submitted to the English-to-Japanese (En\u2192Ja) track of the Simultaneous Translation And Paraphrase for Language Education (STAPLE) shared task at The 4th Workshop on Neural Generation and Translation (WNGT2020). In most cases, machine translation systems generate a single output for an input sentence. However, to assist language learners with better and more diverse feedback, it is helpful to create a machine translation system that can produce diverse translations of each input sentence. Creating such systems, however, would require complex modifications to a model to ensure the diversity of outputs. In this paper, we investigated whether such a system can be created in a simple way and whether it can produce the desired diverse outputs. In particular, we combined the outputs of forward and backward neural machine translation (NMT) models. 
Our system achieved third place in the En\u2192Ja track despite adopting only a simple approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "1 Introduction WNGT2020 1 on STAPLE 2 (Mayhew et al., 2020) addresses generating high-coverage sets of plausible translations, which can be useful in machine translation (MT), MT evaluation, multilingual paraphrasing, and language education technology. In Duolingo (the world's largest language learning platform), some learning takes place via translation-based exercises, and assessment is done by comparing the learners' responses to a large set of acceptable human-generated translations. Therefore, retaining richer paraphrases of the translation results would help generate more accurate feedback for learners. Several studies have been conducted on the diversity of translation results (Vijayakumar et al., 2018; Xu et al., 2018; Shu et al., 2019; Ippolito et al., 2019). However, these methods rely on complex approaches, for example, modifying beam search (Vijayakumar et al., 2018), introducing rewriting patterns or sentence codes (Xu et al., 2018; Shu et al., 2019), or using post-decoding clustering (Ippolito et al., 2019). 
However, we were curious whether diverse outputs can be produced using only a simple approach.", "cite_spans": [ { "start": 38, "end": 59, "text": "(Mayhew et al., 2020)", "ref_id": "BIBREF5" }, { "start": 702, "end": 728, "text": "(Vijayakumar et al., 2018;", "ref_id": "BIBREF12" }, { "start": 729, "end": 745, "text": "Xu et al., 2018;", "ref_id": "BIBREF13" }, { "start": 746, "end": 763, "text": "Shu et al., 2019;", "ref_id": "BIBREF8" }, { "start": 764, "end": 786, "text": "Ippolito et al., 2019)", "ref_id": "BIBREF1" }, { "start": 885, "end": 911, "text": "(Vijayakumar et al., 2018)", "ref_id": "BIBREF12" }, { "start": 963, "end": 980, "text": "(Xu et al., 2018;", "ref_id": "BIBREF13" }, { "start": 981, "end": 998, "text": "Shu et al., 2019)", "ref_id": "BIBREF8" }, { "start": 1033, "end": 1056, "text": "(Ippolito et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Therefore, we aim to generate a variety of translations simply by using commonly adopted neural MT (NMT) methods. For that purpose, we use models trained in the left-to-right (L2R) and right-to-left (R2L) directions, where L2R produces target sentences in the forward direction and R2L produces them in the backward direction, as shown in Figure 1. We then combine the output of L2R with the re-reversed output of R2L to produce diverse translations. 
We adopt this approach for the following reasons:", "cite_spans": [], "ref_spans": [ { "start": 339, "end": 347, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\u2022 No need to modify the NMT model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\u2022 Reversing only the target sentences is sufficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\u2022 It is known that L2R translates prefixes and R2L translates suffixes better (Liu et al., 2016). This indicates that L2R and R2L produce different translation results, which may have an impact on the diversity of the generated translations.", "cite_spans": [ { "start": 78, "end": 96, "text": "(Liu et al., 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our experiments, we show that even the combination of L2R and R2L translation results can produce a sufficiently diverse set of translations. In addition, we demonstrate that even though we use a simple approach, it is possible to generate varied paraphrased translations that do not simply replace one word with another but instead exploit different styles, negation, word order, etc. Our TMU system achieved third place using only this simple approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Several models have been proposed to generate diverse decoding outputs for different tasks. For example, Xu et al. (2018) proposed diverse paraphrase generation by introducing rewriting patterns into the decoder of the encoder-decoder model. Vijayakumar et al. (2018) proposed a diverse beam search algorithm for decoding diverse sequences. They formulate beam search as an optimization problem and augment the objective with a diversity term. 
They encourage diversity between beams at each step by rewarding each group for spending its beam budget to explore different parts of the output space rather than repeatedly chasing sub-optimal beams from prior groups. They report their results on image captioning, visual question generation, and MT tasks. Shu et al. (2019) generated diverse translations by conditioning sentence generation on sentence codes. They explored two methods: (a) a semantic coding model, which extracts sentence codes from semantic information learned in an unsupervised manner, and (b) a syntactic coding model, which derives the sentence codes from parse trees produced by a constituency parser. Ippolito et al. (2019) proposed over-sampling followed by post-decoding clustering to remove similar sequences. They evaluated several techniques on an open-ended dialog task and an image captioning task.", "cite_spans": [ { "start": 105, "end": 121, "text": "Xu et al. (2018)", "ref_id": "BIBREF13" }, { "start": 1111, "end": 1133, "text": "Ippolito et al. (2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "These works introduce various complex modifications to the model in order to achieve diversity in the generated output. In contrast, in this paper, we show how to generate diverse outputs simply. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We used the open-source fairseq 3 (Ott et al., 2019) for training NMT models. We adopt the Transformer (Vaswani et al., 2017) as our translation model. We train two types of models, L2R and R2L, for decoding. For L2R, we train a forward model in the standard way. For the R2L model, we first reverse the target sentences and train the model so that it produces the output backward. The output of R2L is then reversed back to the forward direction. 
We normalize the log probability of each hypothesis by its sentence length and exclude hypotheses whose normalized score is less than -1.55. Then, the n-best translation results of L2R and R2L are combined, and duplicate translations are removed.", "cite_spans": [ { "start": 34, "end": 52, "text": "(Ott et al., 2019)", "ref_id": "BIBREF6" }, { "start": 103, "end": 125, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": "3.1" }, { "text": "In our preliminary experiments, we found that the NMT model cannot produce translations of sufficient quality using only the official dataset. Therefore, we pre-train the NMT models with additional datasets and then fine-tune them with the STAPLE dataset. Thus, we expect that a model learns general translation ability during pre-training and further learns to produce more diverse translations during fine-tuning. Table 1 lists the specific hyperparameters used in our experiments. For fine-tuning, hyperparameters not listed in the table take the same values as in pre-training. We trained four L2R models and four R2L models with different seeds on the same data and then ensembled all of them by taking the union of their outputs. We adjusted the hyperparameters using the development set, which is described in the next subsection. Table 2 summarizes the size of the data used in our experiments for the En\u2192Ja track. The official STAPLE dataset contains multiple translations for a single prompt. We did not use the official development and test data in our experiments because the gold answers were not publicly available. Therefore, we randomly divided the official training data into training data and development data at the prompt level, as shown in Table 2. 
We use OpenSubtitles 4 (Lison and Tiedemann, 2016), Tatoeba 5 (Tiedemann, 2012), and TED 6 train and dev (Cettolo et al., 2012) as additional datasets, which are similar to the STAPLE dataset in terms of sentence length and data domain. We used STAPLE-train, OpenSubtitles, Tatoeba, and TED-train as training data and STAPLE-dev, TED-dev, and TED-test as development data for the pre-training. For fine-tuning, we used STAPLE-train as training data and STAPLE-dev as development data.", "cite_spans": [ { "start": 1317, "end": 1344, "text": "(Lison and Tiedemann, 2016)", "ref_id": "BIBREF3" }, { "start": 1357, "end": 1374, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF10" }, { "start": 1397, "end": 1419, "text": "(Cettolo et al., 2012)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 415, "end": 422, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 852, "end": 859, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1283, "end": 1291, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "System", "sec_num": "3.1" }, { "text": "We lowercased all the English data. English was tokenized using tokenizer.perl of Moses 7 (Koehn et al., 2007), and Japanese was tokenized using MeCab 8 with the IPA dictionary. After tokenization, we adopted a sub-word segmentation mechanism (Sennrich et al., 2016) 9 . Note that, for the training of R2L, we first applied tokenization to the target sentences, then applied sub-word segmentation, and then reversed the token sequences. The size of the sub-word vocabularies was set to 8,000. The sub-word vocabularies were constructed using the pre-training data.", "cite_spans": [ { "start": 90, "end": 110, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF2" }, { "start": 240, "end": 265, "text": "(Sennrich et al., 2016) 9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.3" }, { "text": "your skirt is out of fashion. Output 1 \u3042\u306a\u305f\u306e\u30b9\u30ab\u30fc\u30c8\u306f\u6642\u4ee3\u9045\u308c\u3067\u3042\u308b\u3002 (Your skirt is outdated.) 
Output 2 \u3042\u306a\u305f\u306e\u30b9\u30ab\u30fc\u30c8\u306f\u6d41\u884c\u3057\u3066\u3044\u306a\u3044\u3002 (Your skirt is not in fashion.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source", "sec_num": null }, { "text": "they give me water. Output 1 \u5f7c\u5973\u3089\u306f\u79c1\u306b\u6c34\u3092\u304f\u308c\u308b\u3002 (They give me water.) Output 2 \u79c1\u306f\u5f7c\u3089\u304b\u3089\u6c34\u3092\u3082\u3089\u3044\u307e\u3059\u3002 (I get water from them.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source", "sec_num": null }, { "text": "she found another path. Output 1 \u5f7c\u5973\u306f\u9055\u3046\u9053\u3092\u898b\u3064\u3051\u305f\u3002 (She found a different path.) Output 2 \u5f7c\u5973\u306f\u5225\u306e\u9053\u3092\u898b\u3064\u3051\u305f\u308f (She found another way) Output 3 \u5f7c\u5973\u306f\u5225\u306e\u9053\u3092\u898b\u3064\u3051\u305f\u3088 (She found another way)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source", "sec_num": null }, { "text": "We used weighted macro F1 as the main scoring metric (Mayhew et al., 2020). The system is scored on its ability to return all acceptable human-generated translations, weighted by the likelihood that a learner will respond with each translation. The weighted macro F1 computes a weighted F1 for each prompt and takes the average over all the prompts in the corpus. Table 3 lists the F1 scores of the participating systems in the En\u2192Ja track. Our TMU system was ranked third.", "cite_spans": [ { "start": 53, "end": 74, "text": "(Mayhew et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 366, "end": 373, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "3.4" }, { "text": "We investigate whether decoding in opposite directions contributes to diversity in the translation outputs. 
We compare the results on the development set generated by a single model with a beam size of 64 (L2R, R2L, Single seed 1, Single seed 2) to those generated by combining two models (L2R & R2L, Multi seed), each with a beam size of 32. As a baseline, we also experiment with different seeds and examine their effectiveness. This allows us to see how the direction or the seed contributes to the diversity of translations. Table 4 shows the results for the top-2 single-seed models in terms of performance, the multi-seed model, and the best L2R, R2L, and L2R & R2L models. The results show that using multiple seeds leads to higher F1 scores; however, the improvement is marginal. On the other hand, L2R & R2L improved the weighted F1 score by 1.0 point. Therefore, we show that it is important to combine the outputs of the two directions. Table 5 shows examples of diverse translations generated by combining the outputs of the four-model L2R ensemble and the four-model R2L ensemble. Here, we sampled the outputs from the development set. The first example illustrates how our system uses negation to produce a translation with the same meaning as the source sentence. The second example changes the syntax by using a benefactive verb while preserving the meaning and grammatical correctness. 
The third example uses different styles, which are specific to the Japanese language, to introduce diversity.", "cite_spans": [], "ref_spans": [ { "start": 497, "end": 504, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 915, "end": 922, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Does translation in opposite directions contribute to a diverse translation?", "sec_num": "4.1" }, { "text": "Therefore, we can conclude that even with a simple approach, we can achieve diverse, grammatically correct translations without changing the meaning of the input sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples of Translations", "sec_num": "4.2" }, { "text": "In this paper, we introduced our system submitted to the En\u2192Ja track of the STAPLE shared task at WNGT2020. We have shown that even a simple method that uses only the outputs of forward and backward models can generate a variety of translations while maintaining the original meaning and grammaticality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In the future, we plan to compare our system with existing systems that perform different types of language generation. In addition, we will investigate in depth the impact of the L2R and R2L models on the diversity of the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://sites.google.com/view/wngt20/home 2 https://sharedtask.duolingo.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/pytorch/fairseq", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://opus.nlpl.eu/OpenSubtitles-v2018.php 5 http://opus.nlpl.eu/Tatoeba.php 6 https://wit3.fbk.eu", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/moses-smt/mosesdecoder 8 http://taku910.github.io/mecab 9 https://github.com/rsennrich/subword-nmt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "WIT 3 : Web Inventory of Transcribed and Translated Talks", "authors": [ { "first": "Mauro", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Girardi", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2012, "venue": "EAMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT 3 : Web Inventory of Transcribed and Translated Talks. In EAMT.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Comparison of Diverse Decoding Methods from Conditional Language Models", "authors": [ { "first": "Daphne", "middle": [], "last": "Ippolito", "suffix": "" }, { "first": "Reno", "middle": [], "last": "Kriz", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Kustikova", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daphne Ippolito, Reno Kriz, Maria Kustikova, Jo\u00e3o Sedoc, and Chris Callison-Burch. 2019. Comparison of Diverse Decoding Methods from Conditional Language Models. 
In ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Moses: Open Source Toolkit for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" } ], "year": 2007, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "OpenSubti-tles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles", "authors": [ { "first": "Pierre", "middle": [], "last": "Lison", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2016, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSubti- tles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. 
In LREC.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Agreement on Targetbidirectional Neural Machine Translation", "authors": [ { "first": "Lemao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Finch", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2016, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lemao Liu, Masao Utiyama, Andrew M. Finch, and Eiichiro Sumita. 2016. Agreement on Target- bidirectional Neural Machine Translation. In HLT- NAACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Simultaneous Translation And Paraphrase for Language Education", "authors": [ { "first": "S", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "K", "middle": [], "last": "Bicknell", "suffix": "" }, { "first": "C", "middle": [], "last": "Brust", "suffix": "" }, { "first": "B", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "W", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "B", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2020, "venue": "WNGT@ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Mayhew, K. Bicknell, C. Brust, B. McDowell, W. Monroe, and B. Settles. 2020. Simultaneous Translation And Paraphrase for Language Educa- tion. 
In WNGT@ACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "fairseq: A Fast, Extensible Toolkit for Sequence Modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In NAACL-HLT: Demonstrations.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Neural Machine Translation of Rare Words with Subword Units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. 
In ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Generating Diverse Translations with Sentence Codes", "authors": [ { "first": "Raphael", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Nakayama", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Shu, Hideki Nakayama, and Kyunghyun Cho. 2019. Generating Diverse Translations with Sen- tence Codes. In ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Rethinking the Inception Architecture for Computer Vision", "authors": [ { "first": "Christian", "middle": [], "last": "Szegedy", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Vanhoucke", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Ioffe", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Zbigniew", "middle": [], "last": "Wojna", "suffix": "" } ], "year": 2016, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the Inception Architecture for Computer Vision. In CVPR.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Parallel Data, Tools and Interfaces in OPUS", "authors": [ { "first": "Jorg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorg Tiedemann. 2012. Parallel Data, Tools and Inter- faces in OPUS. 
In LREC.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Attention is All you Need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In NeurIPS.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Diverse beam search for improved description of complex scenes", "authors": [ { "first": "K", "middle": [], "last": "Ashwin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Vijayakumar", "suffix": "" }, { "first": "Ramprasaath", "middle": [ "R" ], "last": "Cogswell", "suffix": "" }, { "first": "", "middle": [], "last": "Selvaraju", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Qing He Sun", "suffix": "" }, { "first": "David", "middle": [ "J" ], "last": "Lee", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Crandall", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashwin K. Vijayakumar, Michael Cogswell, Ram- prasaath R. Selvaraju, Qing He Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. 
In AAAI.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "D-PAGE: Diverse Paraphrase Generation", "authors": [ { "first": "Qiongkai", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Juyan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lizhen", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Lexing", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Nock", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiongkai Xu, Juyan Zhang, Lizhen Qu, Lexing Xie, and Richard Nock. 2018. D-PAGE: Diverse Para- phrase Generation. ArXiv.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Architecture of TMU system.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "text": "", "content": "
Pre-train
Model Architecture: Transformer-big
Number of epochs: 20
Max tokens: 4,096
Optimizer: Adam (\u03b2 1 = 0.9, \u03b2 2 = 0.98, \u03f5 = 1 \u00d7 10^\u22128)
Learning rate: 5 \u00d7 10^\u22124
Learning rate schedule: inverse sqrt
Warmup updates: 4,000
Min learning rate: 1 \u00d7 10^\u22129
Loss function: label smoothed cross-entropy (\u03f5 ls = 0.1) (Szegedy et al., 2016)
Dropout: 0.3
Gradient Clipping: 0.1
Fine-tuning
Number of epochs: 10
Learning rate: 3 \u00d7 10^\u22125
Learning rate schedule: fixed
Translation
Beam size: 64
Ensemble: 4
", "type_str": "table", "html": null, "num": null }, "TABREF2": { "text": "Statistics on official STAPLE data and data used in our experiments. For STAPLE data, the left side indicates the number of prompts and the right side indicates the total number of sentences contained in each prompt.", "content": "", "type_str": "table", "html": null, "num": null }, "TABREF4": { "text": "The official results on the test set for En\u2192Ja in terms of weighted F1.", "content": "
Model F1
Single seed 1 23.7
Single seed 2 23.4
Multi seed23.9
L2R23.7
R2L23.2
L2R & R2L24.7
", "type_str": "table", "html": null, "num": null }, "TABREF5": { "text": "The result for each model in terms of weighted F1 on the development set.", "content": "", "type_str": "table", "html": null, "num": null }, "TABREF6": { "text": "Examples generated by the combination of four ensemble L2R and four ensemble R2L models' outputs using the development set. () indicate their English translation. The English translation of the third example can not fully represent the change of styles used in Japanese language output.", "content": "
", "type_str": "table", "html": null, "num": null } } } }