{ "paper_id": "I17-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:38:57.648452Z" }, "title": "Towards Neural Machine Translation with Partially Aligned Corpora", "authors": [ { "first": "Yining", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "CASIA", "location": { "settlement": "Beijing", "country": "China" } }, "email": "yining.wang@nlpr.ia.ac.cn" }, { "first": "Yang", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "CASIA", "location": { "settlement": "Beijing", "country": "China" } }, "email": "yang.zhao@nlpr.ia.ac.cn" }, { "first": "Jiajun", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "CASIA", "location": { "settlement": "Beijing", "country": "China" } }, "email": "jjzhang@nlpr.ia.ac.cn" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "CASIA", "location": { "settlement": "Beijing", "country": "China" } }, "email": "cqzong@nlpr.ia.ac.cn" }, { "first": "Zhengshan", "middle": [], "last": "Xue", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toshiba (China) Co.,Ltd", "location": {} }, "email": "xuezhengshan2@toshiba.com.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While neural machine translation (NMT) has become the new paradigm, the parameter optimization requires large-scale parallel data which is scarce in many domains and language pairs. In this paper, we address a new translation scenario in which there only exists monolingual corpora and phrase pairs. We propose a new method towards translation with partially aligned sentence pairs which are derived from the phrase pairs and monolingual corpora. To make full use of the partially aligned corpora, we adapt the conventional NMT training method in two aspects. On one hand, different generation strategies are designed for aligned and unaligned target words. On the other hand, a different objective function is designed to model the partially aligned parts. The experiments demonstrate that our method can achieve a relatively good result in such a translation scenario, and tiny bitexts can boost translation quality to a large extent.", "pdf_parse": { "paper_id": "I17-1039", "_pdf_hash": "", "abstract": [ { "text": "While neural machine translation (NMT) has become the new paradigm, the parameter optimization requires large-scale parallel data which is scarce in many domains and language pairs. In this paper, we address a new translation scenario in which there only exists monolingual corpora and phrase pairs. We propose a new method towards translation with partially aligned sentence pairs which are derived from the phrase pairs and monolingual corpora. To make full use of the partially aligned corpora, we adapt the conventional NMT training method in two aspects. On one hand, different generation strategies are designed for aligned and unaligned target words. On the other hand, a different objective function is designed to model the partially aligned parts. 
The experiments demonstrate that our method can achieve a relatively good result in such a translation scenario, and tiny bitexts can boost translation quality to a large extent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural machine translation (NMT), proposed by Kalchbrenner et al. (2013), Sutskever et al. (2014) and Cho et al. (2014), has achieved significant progress in recent years. Different from traditional statistical machine translation (SMT) (Koehn et al., 2003; Chiang, 2005; Liu et al., 2006; Zhai et al., 2012), which contains multiple separately tuned components, NMT builds an end-to-end framework that models the whole translation process. For several language pairs, NMT achieves significantly better translation performance than SMT (Luong et al., 2015b).", "cite_spans": [ { "start": 45, "end": 70, "text": "Kalchbrenner et al. (2013)", "ref_id": "BIBREF11" }, { "start": 73, "end": 95, "text": "Sutskever et al. (2014)", "ref_id": "BIBREF27" }, { "start": 100, "end": 116, "text": "Cho et al. (2014)", "ref_id": "BIBREF5" }, { "start": 232, "end": 252, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF13" }, { "start": 253, "end": 266, "text": "Chiang, 2005;", "ref_id": "BIBREF4" }, { "start": 267, "end": 284, "text": "Liu et al., 2006;", "ref_id": "BIBREF16" }, { "start": 285, "end": 303, "text": "Zhai et al., 2012)", "ref_id": "BIBREF31" }, { "start": 530, "end": 551, "text": "(Luong et al., 2015b)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: An example of our partially aligned training data, in which the source sentence and the target sentence are not parallel but include two parallel parts (highlighted in blue and red, respectively). Source: \u5916\u4ea4\u90e8\u53d1\u8a00\u4eba\u5218\u5efa\u8d85\u4eca\u5929\u5728\u4f8b\u884c\u7684\u8bb0\u8005\u62db\u5f85\u4f1a\u4e0a\u8bf4\uff0c\u7f8e\u56fd\u53f8\u6cd5\u90e8\u957f\u6765\u5317\u4eac\u8fdb\u884c\u4e86\u8bbf\u95ee\u3002 Target: Speaking at a regular press briefing Wednesday, Turkish foreign ministry deputy spokesman said that Turkey hoped the peace talks would continue as planned.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In general, in order to obtain an NMT model of great translation quality, we usually need large-scale parallel data. Unfortunately, such large-scale parallel data is scarce in many domains and language pairs. Without sufficient parallel sentence pairs, NMT tends to learn poor estimates on low-count events. There have been some effective methods for translating language pairs with limited resources under different scenarios (Johnson et al., 2016; Sennrich et al., 2016a). In this paper, we address a new translation scenario in which we do not have any parallel sentences but do have massive monolingual corpora and phrase pairs. Previous methods are difficult to apply in this situation, so we propose a novel method that learns an NMT model using only monolingual data and phrase pairs.", "cite_spans": [ { "start": 429, "end": 451, "text": "(Johnson et al., 2016;", "ref_id": "BIBREF10" }, { "start": 452, "end": 475, "text": "Sennrich et al., 2016a)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main idea is that although no parallel sentences exist, we can use the monolingual data and phrase pairs to derive sentence pairs that are non-parallel but contain parallel parts (in this paper, we call these partially aligned sentences). We can then use these partially aligned sentences to train an NMT model. Figure 1 shows an example of our data. The source sentence and the target sentence are not fully aligned, but they contain two translation fragments: (\"\u5916\u4ea4\u90e8\u53d1\u8a00\u4eba\", \"foreign ministry deputy\") and (\"\u5728\u4f8b\u884c\u7684\u8bb0\u8005\u62db\u5f85\u4f1a\u4e0a\u8bf4\", \"speaking at a regular press\"). Intuitively, these kinds of sentence pairs are useful for building an NMT model.", "cite_spans": [], "ref_spans": [ { "start": 356, "end": 364, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
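This excerpt does not spell out how the partially aligned corpus is extracted, so the following is only a minimal sketch of one plausible derivation: index the monolingual target sentences by the target phrases they contain, then pair any source sentence containing a source phrase with a target sentence containing that phrase's translation. The function name, data layout, and greedy first-match pairing are our illustrative assumptions, not the paper's procedure.

```python
from collections import defaultdict

def build_partially_aligned(phrase_pairs, src_mono, tgt_mono):
    """Pair monolingual sentences that share a translation fragment.

    phrase_pairs: list of (src_phrase, tgt_phrase) strings
    src_mono, tgt_mono: lists of monolingual sentences
    Returns (src_sentence, tgt_sentence, (src_phrase, tgt_phrase)) triples.
    """
    # Index target sentences by the target phrases they contain.
    # (A real system would use an inverted index; these loops are
    # quadratic and only meant to show the idea.)
    tgt_index = defaultdict(list)
    for src_phrase, tgt_phrase in phrase_pairs:
        for tgt_sent in tgt_mono:
            if tgt_phrase in tgt_sent:
                tgt_index[(src_phrase, tgt_phrase)].append(tgt_sent)

    corpus = []
    for src_sent in src_mono:
        for (src_phrase, tgt_phrase), tgt_sents in tgt_index.items():
            if src_phrase in src_sent:
                # Greedily take one target sentence per match; a real
                # system would score candidates and may keep several.
                corpus.append((src_sent, tgt_sents[0], (src_phrase, tgt_phrase)))
                break
    return corpus
```

A real extraction would also record where the matched fragments occur in each sentence, since the training method described next treats aligned and unaligned target words differently.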
{ "text": "To use these partially aligned sentences, the training method must differ from the original methods, which are designed for parallel corpora. In this work, we adapt the conventional NMT training method in two main ways. On one hand, different generation strategies are designed for aligned and unaligned target words. For aligned words, our method guides the translation process with both the source-side context and the previously predicted words. When generating unaligned target words, our model depends only on the previously generated words, without considering the source-side context. On the other hand, we redesign the objective function so as to emphasize the partially aligned parts in addition to maximizing the log-likelihood of the target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The contributions of our paper are twofold: 1) Our approach addresses a new translation scenario in which only monolingual data and phrase pairs exist. We propose a method to train an NMT model under this scenario. The method is simple and easy to implement, and it can be used in any attention-based NMT framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2) Empirical experiments on Chinese-English translation tasks under this scenario show that our method can achieve a relatively good result. Moreover, if we add only a tiny parallel corpus, the method obtains significant improvements in translation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach can be easily applied to any end-to-end attention-based NMT framework. In this work, we follow the neural machine translation architecture of Bahdanau et al. (2015), which we summarize in this section. Given a source sentence X = {x_1, x_2, ..., x_{T_x}} and a target sentence Y = {y_1, y_2, ..., y_{T_y}}, the goal of machine translation is to transform the source sentence into the target sentence.
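Formally, the attention-based framework factorizes this transformation into word-level conditional probabilities generated left to right (this factorization is standard in the Bahdanau et al. (2015) framework; the notation reuses the X and Y defined above):

```latex
P(Y \mid X) = \prod_{i=1}^{T_y} p\left(y_i \mid y_1, \ldots, y_{i-1}, X\right)
```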
The end-to-end NMT framework consists of two recurrent neural networks, called the encoder and the decoder, respectively. First, the encoder network encodes X into the context vectors C. Then, the decoder network generates the target translation one word at a time, conditioned on the context vectors C and the previously generated target words. More specifically, the probability of the i-th target word is p(y_i | y_{<i}, X) = g(y_{i-1}, s_i, c_i), where g is a nonlinear function, s_i is the decoder hidden state at step i, and c_i is the context vector computed by the attention mechanism.
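To make this concrete, below is a minimal PyTorch-style sketch of one decoding step with additive (Bahdanau) attention, extended with the two generation strategies from Section 1: for aligned target words the prediction uses both the attention context c_i and the previous words, while for unaligned words the source-side context is dropped (zeroing c_i is our simple stand-in; the paper's exact mechanism may differ). The loss likewise only illustrates "emphasize the partially aligned parts"; the additive weighting and `lam` are our assumptions, and all module and variable names are ours, not the paper's released code.

```python
import torch
import torch.nn as nn

class AttnDecoderStep(nn.Module):
    """One unbatched GRU decoder step with Bahdanau (additive) attention."""

    def __init__(self, emb_dim, hid_dim, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.att_W = nn.Linear(hid_dim, hid_dim)      # scores the decoder state s_{i-1}
        self.att_U = nn.Linear(2 * hid_dim, hid_dim)  # scores the bi-RNN encoder annotations
        self.att_v = nn.Linear(hid_dim, 1)
        self.gru = nn.GRUCell(emb_dim + 2 * hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, y_prev, s_prev, enc_states, aligned=True):
        # y_prev: LongTensor scalar word id; s_prev: (hid_dim,);
        # enc_states: (T_x, 2*hid_dim) annotations from a bidirectional encoder.
        e = self.att_v(torch.tanh(self.att_W(s_prev) + self.att_U(enc_states)))
        alpha = torch.softmax(e.squeeze(-1), dim=0)   # attention weights over source words
        c = alpha @ enc_states                        # context vector c_i
        if not aligned:
            # Unaligned target word: condition only on previously generated
            # words; zeroing c is our illustrative way of dropping the source context.
            c = torch.zeros_like(c)
        s = self.gru(torch.cat([self.embed(y_prev), c]), s_prev)
        return torch.log_softmax(self.out(s), dim=-1), s  # log p(y_i | ...), s_i


def partially_aligned_loss(log_probs, targets, aligned_mask, lam=1.0):
    """NLL over a target sentence with extra weight on aligned positions.

    log_probs: (T_y, vocab) from the decoder; targets: (T_y,) word ids;
    aligned_mask: (T_y,) float 0/1. The weighting form and `lam` are
    assumptions; the paper only states that its objective emphasizes
    the partially aligned parts.
    """
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return ((1.0 + lam * aligned_mask) * nll).mean()
```

During training, the per-position `aligned` flag and `aligned_mask` would come from the phrase-pair matches that produced each partially aligned sentence.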