{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:51.867994Z"
},
"title": "Improving Document-Level Neural Machine Translation with Domain Adaptation",
"authors": [
{
"first": "Sami",
"middle": [
"Ul"
],
"last": "Haq",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Sciences and Technology",
"location": {
"country": "Pakistan"
}
},
"email": "sami.ulhaq@ceme.nust.edu.pk"
},
{
"first": "Sadaf",
"middle": [
"Abdul"
],
"last": "Rauf",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": "sadaf.abdulrauf@gmail.com"
},
{
"first": "Arslan",
"middle": [],
"last": "Shoukat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Sciences and Technology",
"location": {
"country": "Pakistan"
}
},
"email": "arslanshaukat@ceme.nust.edu.pk"
},
{
"first": "Noor",
"middle": [],
"last": "-E-Hira",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent studies have shown that translation quality of NMT systems can be improved by providing document-level contextual information. In general sentence-based NMT models are extended to capture contextual information from large-scale document-level corpora which are difficult to acquire. Domain adaptation on the other hand promises adapting components of already developed systems by exploiting limited in-domain data. This paper presents FJWU's system submission at WNGT, we specifically participated in Document level MT task for German-English translation. Our system is based on context-aware Transformer model developed on top of original NMT architecture by integrating contextual information using attention networks. Our experimental results show that providing previous sentences as context significantly improves the BLEU score as compared to a strong NMT baseline. We also studied the impact of domain adaptation on document level translation and were able to improve results by adapting the systems according to the testing domain.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent studies have shown that translation quality of NMT systems can be improved by providing document-level contextual information. In general sentence-based NMT models are extended to capture contextual information from large-scale document-level corpora which are difficult to acquire. Domain adaptation on the other hand promises adapting components of already developed systems by exploiting limited in-domain data. This paper presents FJWU's system submission at WNGT, we specifically participated in Document level MT task for German-English translation. Our system is based on context-aware Transformer model developed on top of original NMT architecture by integrating contextual information using attention networks. Our experimental results show that providing previous sentences as context significantly improves the BLEU score as compared to a strong NMT baseline. We also studied the impact of domain adaptation on document level translation and were able to improve results by adapting the systems according to the testing domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In past few years, machine translation systems have witnessed remarkable growth due to increasing amount of multilingual information. Neural Machine Translation (NMT) has become one of the powerful and de-facto approaches recognized for its generality and effectiveness (Li et al., 2018) . Due to better accuracy of deep neural models, it has quickly achieved state of the art performance in machine translation (Shen et al., 2015) .",
"cite_spans": [
{
"start": 270,
"end": 287,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 412,
"end": 431,
"text": "(Shen et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Standard neural machine translation model works on individual sentences and focuses on short context windows for improving translation quality while ignoring cross-sentence links and dependencies (Xiong et al., 2019) . Sentence-by-sentence translation of well-formed documents may generate an incoherent target text which is unable to span the entire document. This largely limits the success of NMT, as document context is totally ignored. Intuitively, to generate coherent translation of source document, machine learning models expect cross-sentence dependencies and linkages. To this end, several models (Voita et al., 2018; Wang et al., 2017; Tu et al., 2018; Maruf and Haffari, 2017; Bawden et al., 2017; have been proposed for document-wide translation.",
"cite_spans": [
{
"start": 196,
"end": 216,
"text": "(Xiong et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 608,
"end": 628,
"text": "(Voita et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 629,
"end": 647,
"text": "Wang et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 648,
"end": 664,
"text": "Tu et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 665,
"end": 689,
"text": "Maruf and Haffari, 2017;",
"ref_id": "BIBREF16"
},
{
"start": 690,
"end": 710,
"text": "Bawden et al., 2017;",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Adapting NMT models for context-aware translations has the biggest challenge of limited availability of bilingual document-level corpora. Since, only few resources are available for training, the application domain of NMT may greatly vary from domain of training data. Consequently, the performance of NMT system may quickly degrade as soon as the testing conditions deviate from training conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain adaptation has been an active research topic in the field of machine translation to improve translation performance often for low resource settings (Koehn and Schroeder, 2007) . The quality of neural machine translation heavily depends upon domain-specificity of test data and the amount of parallel training data. The demand for high quality domain specific MT systems has significantly increased over the years but the bilingual corpora for relevant languages still lack in quantity (Chu and Wang, 2018) .",
"cite_spans": [
{
"start": 155,
"end": 182,
"text": "(Koehn and Schroeder, 2007)",
"ref_id": "BIBREF12"
},
{
"start": 492,
"end": 512,
"text": "(Chu and Wang, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we aim to demonstrate performance optimization in a particular domain by training document-level models on large out-of domain parallel corpus combined with small in-domain corpus using domain adaptation techniques. Our experiments on German-English data using documentlevel translation model (Miculicich et al., 2018) an extension of standard NMT Transformer (Vaswani et al., 2017) reveals the importance of contextual information and domain adaptation on translation quality.",
"cite_spans": [
{
"start": 307,
"end": 332,
"text": "(Miculicich et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 374,
"end": 396,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By comparing the performance with standard NMT baseline models trained on bilingual data, we show that NMT models exposed to random and actual contextual information are more sensitive to translation quality. We also demonstrate the impact of domain adaptation on translation quality by adapting document-level system to the testing domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use two types of models, sentence level and document-level context aware model, both built using Transformer architecture (Vaswani et al., 2017) . For our primary submission we train documentlevel models on parallel corpus with document boundaries. The document level model consists of hierarchical attention encoder and decoder to capture both source and target side contextual information during training and testing.",
"cite_spans": [
{
"start": 125,
"end": 147,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},
{
"text": "Our baseline for sentence-level models is OpenNMT-py (Klein et al., 2017) implementation of the Transformer architecture. To be able to establish comparison with document-level models, strong sentence-level baseline is defined with the same architecture (Vaswani et al., 2017) and training data as used for document-level models. Model configurations and training/evaluation data for sentence-level models are discussed in section 3.2.",
"cite_spans": [
{
"start": 53,
"end": 73,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 254,
"end": 276,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level Models",
"sec_num": "2.1"
},
{
"text": "The motivation behind this research work is to test document-level NMT models on sports domain for WNGT20 shared task. The standard Transformer encoder and decoder are extended to take additional sentences as contextual input (Miculicich et al., 2018) . Hierarchical attention networks (Yang et al., 2016) are employed on both sides of the NMT model to capture larger context. HAN encoder and decoder can be used jointly to provide dynamic access for selecting previous sentences or predicting most appropriate words.",
"cite_spans": [
{
"start": 226,
"end": 251,
"text": "(Miculicich et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 286,
"end": 305,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Models",
"sec_num": "2.2"
},
{
"text": "The baseline and document-level models are trained on English-German parallel data of differ-ent domains (i.e. news, press and sports) provided by WMT19 1 and WNGT20 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "Split Sentences Documents ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "Document level models require parallel data with document boundaries for training and testing. Parallel data without document boundaries can not be directly used to train document level models, if it is imperative to use, then artificial document boundaries need to be generated. Our training corpus is also constrained to use only English-German data from the WMT19 shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "As mentioned earlier, one of the main constraints in training document-level models is limited availability of document-level corpora. WMT19 provides document split version of Europarl v9, New-Commentary v14 and Rapid corpus. Rotowire dataset, made available by WNGT20 DGT task also contains parallel data with document distinctions. For document-parallel data, we preserve the document boundaries during data filtering and concatenation as to get original documents back after translation. For DGT shared task submission, Rotowire test set is provided by WNGT20. Since, the models are trained on data from multiple domains, we create standard test set by selecting chunk of data from each domain to generate a more fair representation of each domain in standard test set. This was done by selecting multiple documents from a particular domain based on size of the dataset 3 . All datasets are tokenised using script provided by WNGT organizers 4 . Table 1 summarizes the corpus details.",
"cite_spans": [],
"ref_spans": [
{
"start": 949,
"end": 956,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "RotoWire (Wiseman et al., 2017) is sports data consisting of article summaries about NBA basket ball games. RotoWire dataset is available in two formats, json and plain text. Both formats contain identical split for train/development and test sets. We used plain text format that contains separate files according to IDs of documents, each game summary is taken as a separate document.",
"cite_spans": [
{
"start": 9,
"end": 31,
"text": "(Wiseman et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In-domain data",
"sec_num": "3.1.1"
},
{
"text": "Major portion of training data includes out-ofdomain parallel corpora taken from WMT19. We used English-German set of Rapid, Newscommentary and Europarl with document boundaries. Document boundaries of Europarl v9 dataset resulted in very long documents, therefore we decided to redefine the document boundaries while keeping the same order of sentences 5 . For this, we take the average document size of Rotowire training data which gave us 14 sentences per document. After discarding original space split document boundaries from Europarl v9, we add new boundaries to keep a reasonable size of context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-domain data",
"sec_num": "3.1.2"
},
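{
"text": "A minimal sketch of the re-chunking described above, assuming the corpus is available as a flat list of sentence pairs in the original order (function and variable names are illustrative, not the actual preprocessing script):
def rechunk(sentence_pairs, doc_size=14):
    # Split a flat, order-preserving list of sentence pairs into
    # pseudo-documents of roughly the average RotoWire document
    # length (14 sentences per document).
    return [sentence_pairs[i:i + doc_size]
            for i in range(0, len(sentence_pairs), doc_size)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-domain data",
"sec_num": "3.1.2"
},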
{
"text": "Since our baseline and document-level MT systems use OpenNMT-py (Klein et al., 2017) implementation of Transformer model, we used similar configuration parameters are as reported in original transformer paper (Vaswani et al., 2017) . Transformer model incorporates 6-hidden layers for encoder and decoder. All the hidden states have dropout of 0.1 and 512 dimensions. Model is trained with 8000 warm-up steps with a learning rate of 0.01. We checkpoint the model every 1000 steps for validation. Batch size is set to 2048 and modes are trained for 50K steps.",
"cite_spans": [
{
"start": 64,
"end": 84,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 209,
"end": 231,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration and Training",
"sec_num": "3.2"
},
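{
"text": "The warm-up behaviour above follows the learning-rate schedule of the original Transformer paper, where the rate grows linearly over the warm-up steps and then decays with the inverse square root of the step number. A minimal sketch, assuming the configured rate of 0.01 acts as a multiplier on that schedule (an illustration, not the exact training script):
def noam_lr(step, d_model=512, warmup=8000, base_lr=0.01):
    # Linear warm-up for the first `warmup` steps, then inverse
    # square-root decay, as in Vaswani et al. (2017).
    step = max(step, 1)
    return base_lr * (d_model ** -0.5) * min(step ** -0.5, step * warmup ** -1.5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration and Training",
"sec_num": "3.2"
},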
{
"text": "As in the original paper (Miculicich et al., 2018) , two step process is followed for training the document-level models. In the first step, NMT model is optimized without context-aware HAN. After that, we optimize the parameter's for HAN encoder, decoder and joint model. HAN Transformer models gave best performance for 1-3 previous sentences, we use k=3 previous sentences for both source and target side context.",
"cite_spans": [
{
"start": 25,
"end": 50,
"text": "(Miculicich et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration and Training",
"sec_num": "3.2"
},
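{
"text": "The k-sentence context used here can be pictured as the sentences immediately preceding the current one within the same document; context never crosses a document boundary. A minimal sketch (hypothetical helper, not the HAN implementation itself):
def previous_context(doc_sentences, index, k=3):
    # Return up to k sentences that precede position `index` in the
    # current document; earlier documents contribute nothing.
    return doc_sentences[max(0, index - k):index]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration and Training",
"sec_num": "3.2"
},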
{
"text": "We present the results of experimentation from our models on German-English translation in Tables 2, 3 and 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "In our initial experiments, we investigate the impact of domain adaptation on translation results at sentence-level. Since the NMT models are adapted for sports domain (RotoWire), so following (Hira et al., 2019) we gave more weightage to RotoWire corpus by replicating the corpus twice and thrice to study the impact. The results in Table 2 are reported on NMT model scores for Rotowire (roto), Rapid (rap) and News-Commentary (nc) corpus, here roto is the in-domain corpus. Adding only 0.2M of in-domain roto corpus to 30M rap German corpus yields a substantial improvement of around +4 BLEU points (Table 2: row 2). On the other hand, addition of 5.6M German nc corpus to previous systems, gives around +1.5 points improvement (row 3). This is an obvious demonstration of the positive effect of domain adaptation on translation quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Domain adaptation: Sentence level",
"sec_num": "4.1"
},
{
"text": "We further explore this effect by replicating twice the roto corpus, this gives a big improvement of 8.43 BLEU points on roto2 \u2212 rap \u2212 nc (Table 2: row 4). Replicating roto 3 times, however gives +3.68 points improvement from previous system and an overall improvements of +12.11 BLEU points from roto \u2212 rap \u2212 nc. Clearly, by adapting to the testing domain by updating the model weights, substantial improvements are achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain adaptation: Sentence level",
"sec_num": "4.1"
},
{
"text": "For document-level models, taking a strong baseline of sentence level models, we achieved remarkable improvements by incorporating context as shown in Table 3 . We report the results for 4 corpus combinations. . This is achieved by incorporation of contextual information by HAN encoder. We have used a context of 3 sentences as this was reported to be the best for capturing context by (Miculicich et al., 2018) . This shows a superiority of context based models on standard NMT models. With the joint model we get a score of 38.32 BLEU points.",
"cite_spans": [
{
"start": 387,
"end": 412,
"text": "(Miculicich et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Document-level Adaptation with Context Aware Translation Experiments",
"sec_num": "4.2"
},
{
"text": "In columns (3, 4) of Table 3 , the effect of domain adaptation to test domain is reported on documentlevel systems. This years test data were the documents in the test folder provided by the organisers. We experimented by building systems by replicating the Rotowire training corpus, twice and thrice in an attempt to enable the translation model to learn parameter closer to the testing domain. We can clearly see that models with replicated in-domain corpus outperformed and achieved better score than previous document-level and sentence-based models. The best score of 43.08 is obtained when Rotowire is replicated thrice i.e. RotoW ire( * 3) + Rapid + N C + Euro.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Document-level Adaptation with Context Aware Translation Experiments",
"sec_num": "4.2"
},
{
"text": "All of our document-level models preformed better than sentence-level models but most importantly encoder models gave best scores which clearly indicates that source side provides correct contextual information as compared to target side. The highest score is achieved by combining HAN encoder and HAN decoder model for corpus in second and third two columns. Joint model for last column performed poorly, this can be attributed to the fact that HAN decoder is not contributing complementary information to further improve translations. Another reason can be our selection of decoder's context, as due to limited availability of time we only use decoded states of previous sentences for target side context while other configurations (Miculicich et al., 2018) are also available. Table 4 presents results from our different context integration experiments. We have been interested to check how much improvement is due to additional contextual information, therefore we created a similar setup (Scherrer et al., 2019) for analysis of context. For this, we create three variants of our train and test set to evaluate context aware systems:",
"cite_spans": [
{
"start": 734,
"end": 759,
"text": "(Miculicich et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 993,
"end": 1016,
"text": "(Scherrer et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 780,
"end": 787,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Document-level Adaptation with Context Aware Translation Experiments",
"sec_num": "4.2"
},
{
"text": "\u2022 Regular context: The order of sentences in train and test set is kept same as they appear in original documents to evaluate consistent contextual setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Context Integration Experiments",
"sec_num": "4.3"
},
{
"text": "\u2022 Random context: The train and test set is shuffled such that the document boundaries now represent inconsistent contextual sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Context Integration Experiments",
"sec_num": "4.3"
},
{
"text": "\u2022 No context: Document boundaries of test set is modified such that one sentence now presents one document, which means no additional context is made available during translations. We are forcing document level model to avoid context by providing single sentence document during testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Context Integration Experiments",
"sec_num": "4.3"
},
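{
"text": "A minimal sketch of how the random-context and no-context variants can be derived from the regular document-split data (helper names are illustrative; the actual preprocessing may differ):
import random

def random_context(documents, seed=0):
    # Shuffle sentences across the whole corpus, then regroup them
    # into documents of the original sizes, so that boundaries no
    # longer enclose coherent context.
    sizes = [len(doc) for doc in documents]
    flat = [sent for doc in documents for sent in doc]
    random.Random(seed).shuffle(flat)
    out, i = [], 0
    for n in sizes:
        out.append(flat[i:i + n])
        i += n
    return out

def no_context(documents):
    # Treat every sentence as its own single-sentence document.
    return [[sent] for doc in documents for sent in doc]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Context Integration Experiments",
"sec_num": "4.3"
},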
{
"text": "We report Blue score for context integration experiments in Table 4 . Document-level models are expected to perform better when contextual information is available. The BLEU score decreases when we move from regular context to random and no context as indicated by row 1 and column 3-5 of Table 4 . Context aware models when trained on inconsistent training data, it hardly effects their performance when actual context is random or missing during testing. Model in row 2 is trained on random or inconsistent document level data, column 3-5 represent scores for regular, random and noncontextual test data. Model trained on data with random context is insensitive to context during testing.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 4",
"ref_id": null
},
{
"start": 289,
"end": 296,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Other Context Integration Experiments",
"sec_num": "4.3"
},
{
"text": "5.1 Context-aware NMT Improving machine translation systems by developing document-level models for SMT (Garcia et al., 2015; Hardmeier et al., 2013; Gong et al., 2011) and NMT (Maruf and Haffari, 2017; Tu et al., 2018; Voita et al., 2018; Kuang et al., 2017; Wang et al., 2017) has been an important research area. These contributions are briefly discussed in this section. Table 4 : BLEU for EN\u21d2DE translations using Regular, Random and None contextual settings of corpus. (Miculicich et al., 2018) proposed documentlevel approach with Hierarchical Attention Network (HAN) to provide contextual information during translation. Two HANs are considered for integrating source and target context in NMT. HAN are believed to provide dynamic access to contextual information as compared to Hierarchical Recurrent Neural Networks (HRNN). However, the approach is restrictive for incorporating large contextual information by only considering a limited number of previous source/target sentences.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Garcia et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 126,
"end": 149,
"text": "Hardmeier et al., 2013;",
"ref_id": "BIBREF7"
},
{
"start": 150,
"end": 168,
"text": "Gong et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 177,
"end": 202,
"text": "(Maruf and Haffari, 2017;",
"ref_id": "BIBREF16"
},
{
"start": 203,
"end": 219,
"text": "Tu et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 220,
"end": 239,
"text": "Voita et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 240,
"end": 259,
"text": "Kuang et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 260,
"end": 278,
"text": "Wang et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 475,
"end": 500,
"text": "(Miculicich et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 375,
"end": 382,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Cache-based memory approach is proposed by (Tu et al., 2018) to provide document context during translation. Memory networks keep the representation of a set of words in cache to provide contextual information to NMT in the form of words. However, the stored representations are considered irrespective of sentences in which they occur and do not provide actual context to NMT. Cache based memory models have been used in both SMT (Gong et al., 2011) and NMT to store rich representations of source and target text. (Kuang et al., 2017) use two caches, dynamic cache to capture dynamic context by storing words of translated sentence and topic cache which stores topical words of target side from entire document. Through a gating mechanism, the probability of NMT model and cache based neural model is combined to predict the next word.",
"cite_spans": [
{
"start": 43,
"end": 60,
"text": "(Tu et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 431,
"end": 450,
"text": "(Gong et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 516,
"end": 536,
"text": "(Kuang et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Memory network-based approach presented by (Maruf and Haffari, 2017) is used to integrate global source and target context to sentence-based NMT. Keeping the source and target context in memory can be very time consuming and mem-ory inefficient as the sentence pairs in document could be enormous. Another study by (Xiong et al., 2019) is based on deliberation networks to capture the cross-sentence context by improving the translation of baseline NMT system in the second pass. Generation of discourse coherent output is largely dependent upon the performance of the canonical NMT model.",
"cite_spans": [
{
"start": 43,
"end": 68,
"text": "(Maruf and Haffari, 2017)",
"ref_id": "BIBREF16"
},
{
"start": 315,
"end": 335,
"text": "(Xiong et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The approach proposed by implemented with document-level context outperforms existing cache based RNN search model. Extending Transformer model has achieved better context awareness and a low computational overhead. (Voita et al., 2018) introduce a context aware NMT model in which they control and analyze the flow of information from the extended context to the translation model. They show that using the previous sentence as context their model is able to implicitly capture anaphora.",
"cite_spans": [
{
"start": 216,
"end": 236,
"text": "(Voita et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Both source and target side contextual information plays important role in document-level translation. Inspired from previous sentence-based context-aware approaches (Voita et al., 2018; Stojanovski and Fraser, 2019; , we are using extended Transformer model (Miculicich et al., 2018) with ability to use dynamic context for document-level experiments.",
"cite_spans": [
{
"start": 166,
"end": 186,
"text": "(Voita et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 187,
"end": 216,
"text": "Stojanovski and Fraser, 2019;",
"ref_id": "BIBREF21"
},
{
"start": 259,
"end": 284,
"text": "(Miculicich et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The basic concept behind domain adaptation in NMT is utilizing large amount of available parallel data for training NMT models and adapting these to novel domains with small in-domain data (Freitag and Al-Onaizan, 2016) . In the simplest approach, in-domain data can be used to fine-tune models trained on large-scale out-of-domain data. Training NMT models from scratch on combined data can take several weeks and may suffer from performance degradation on in-domain test data.",
"cite_spans": [
{
"start": 189,
"end": 219,
"text": "(Freitag and Al-Onaizan, 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "5.2"
},
{
"text": "Fine-tuning is a fast and efficient method to integrate in-domain data, and does not need building systems from scratch. Fine tuning for NMT (Dakwale and Monz, 2017; Freitag and Al-Onaizan, 2016; Luong and Manning, 2015) is achieved by further training a neural model on in-domain data which is already trained on large general domain training data. Adoption to new domain is achieved by (Sennrich et al., 2015) by using synthetic data through back-translation of target in-domain monolingual text and retraining on combined training corpus by adding new data.",
"cite_spans": [
{
"start": 141,
"end": 165,
"text": "(Dakwale and Monz, 2017;",
"ref_id": "BIBREF3"
},
{
"start": 166,
"end": 195,
"text": "Freitag and Al-Onaizan, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 196,
"end": 220,
"text": "Luong and Manning, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 388,
"end": 411,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "5.2"
},
{
"text": "For domain adaptation, we use data augmentation method similar to (Chu et al., 2017) by oversampling small in-domain corpus. This simple data augmentation approach does not require any modification in NMT architecture and forces NMT to pay equal/more attention to in-domain training data.",
"cite_spans": [
{
"start": 66,
"end": 84,
"text": "(Chu et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "5.2"
},
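{
"text": "A minimal sketch of the oversampling step (the replication factor corresponds to the roto*2 and roto*3 systems of Section 4; names are illustrative):
def oversample(in_domain, out_of_domain, factor=3):
    # Replicate the small in-domain corpus `factor` times and
    # concatenate it with the out-of-domain corpus, so that the
    # in-domain data receives more weight during training.
    return in_domain * factor + out_of_domain
Training then proceeds on the concatenated corpus without any change to the model or training procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "5.2"
},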
{
"text": "In this study, we present methods to improve document-level neural machine translation. Following recently reported results on the task, our experiments also reiterate the fact that incorporating context in translation helps considerably improve the quality. Taking a strong Transformer based baseline model trained on substantial corpus (a concatenation of four corpora RotoWire, Rapid, Euro and News Commentary), context aware document models result in significant improvement in BLEU points. We have also experimented with the effects of corpus replication to adapt to the domain of test corpus. We find it an effective method to improve translation quality and domain adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We have submitted results of our best model (HAN encoder) for German-English direction as reported in Table 3 , for official evaluation. Han encoder model with domain adaptation techniques achieved 43.08 BLEU score. We have computed BLEU scores using Moses multi \u2212 blue.perl script.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
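{
"text": "For reference, the Moses script takes the reference file as an argument and reads the system output from standard input; a minimal invocation from Python (file names are illustrative):
import subprocess

# Score a hypothesis file against a reference with the Moses
# multi-bleu.perl script (paths are illustrative).
with open('hypothesis.en') as hyp:
    subprocess.run(['perl', 'multi-bleu.perl', 'reference.en'],
                   stdin=hyp, check=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},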
{
"text": "http://www.statmt.org/wmt19/ 2 https://sites.google.com/view/wngt20/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For sentence based models, we can select sentences randomly but for document-level models entire document is considered for standard test set.4 https://github.com/neulab/ie-eval 5 In the original approach for document-level NMT, they failed to obtain significant improvements when context increases beyond 3 sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This study is funded by Higher Education Commission of Pakistan's project: National Research Program for Universities (NRPU) (5469/Punjab/NRPU/R&D/HEC/2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating discourse phenomena in neural machine translation",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Bawden",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.00513"
]
},
"num": null,
"urls": [],
"raw_text": "Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2017. Evaluating discourse phenom- ena in neural machine translation. arXiv preprint arXiv:1711.00513.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An empirical comparison of domain adaptation methods for neural machine translation",
"authors": [
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "385--391",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2061"
]
},
"num": null,
"urls": [],
"raw_text": "Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of domain adaptation methods for neural machine translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 385-391, Vancouver, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A survey of domain adaptation for neural machine translation",
"authors": [
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.00258"
]
},
"num": null,
"urls": [],
"raw_text": "Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. arXiv preprint arXiv:1806.00258.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Finetuning for neural machine translation with limited degradation across in-and out-of-domain data. Proceedings of the XVI Machine Translation Summit",
"authors": [
{
"first": "Praveen",
"middle": [],
"last": "Dakwale",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Praveen Dakwale and Christof Monz. 2017. Fine- tuning for neural machine translation with limited degradation across in-and out-of-domain data. Pro- ceedings of the XVI Machine Translation Summit, 117.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Fast domain adaptation for neural machine translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. CoRR, abs/1612.06897.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Document-level machine translation with word vector models",
"authors": [
{
"first": "Eva Mart\u00ednez",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Espa\u00f1a-Bonet",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 18th Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Mart\u00ednez Garcia, Cristina Espa\u00f1a-Bonet, and Llu\u00eds M\u00e0rquez. 2015. Document-level machine transla- tion with word vector models. In Proceedings of the 18th Annual Conference of the European Asso- ciation for Machine Translation, pages 59-66.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cache-based document-level statistical machine translation",
"authors": [
{
"first": "Zhengxian",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "909--919",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengxian Gong, Min Zhang, and Guodong Zhou. 2011. Cache-based document-level statistical ma- chine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing, pages 909-919. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Docent: A document-level decoder for phrase-based statistical machine translation",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL 2013 (51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Hardmeier, Sara Stymne, J\u00f6rg Tiedemann, and Joakim Nivre. 2013. Docent: A document-level decoder for phrase-based statistical machine transla- tion. In ACL 2013 (51st Annual Meeting of the Asso- ciation for Computational Linguistics); 4-9 August 2013;",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploring transfer learning and domain data selection for the biomedical translation",
"authors": [
{
"first": "Sadaf",
"middle": [],
"last": "Noor-E Hira",
"suffix": ""
},
{
"first": "Kiran",
"middle": [],
"last": "Abdul Rauf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiani",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "156--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noor-e Hira, Sadaf Abdul Rauf, Kiran Kiani, Ammara Zafar, and Raheel Nawaz. 2019. Exploring transfer learning and domain data selection for the biomed- ical translation. In Proceedings of the Fourth Con- ference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 156-163, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Does neural machine translation benefit from larger context? arXiv preprint",
"authors": [
{
"first": "Sebastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Stanislas",
"middle": [],
"last": "Lauly",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.05135"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine trans- lation benefit from larger context? arXiv preprint arXiv:1704.05135.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Opennmt: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.02810"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M Rush. 2017. Opennmt: Open- source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Experiments in domain adaptation for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07",
"volume": "",
"issue": "",
"pages": "224--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine trans- lation. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07, pages 224-227, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Modeling coherence for neural machine translation with dynamic and topic caches",
"authors": [
{
"first": "Shaohui",
"middle": [],
"last": "Kuang",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.11221"
]
},
"num": null,
"urls": [],
"raw_text": "Shaohui Kuang, Deyi Xiong, Weihua Luo, and Guodong Zhou. 2017. Modeling coherence for neural machine translation with dynamic and topic caches. arXiv preprint arXiv:1711.11221.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Linguistic knowledge-aware neural machine translation",
"authors": [
{
"first": "Qiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
},
{
"first": "Muhua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP)",
"volume": "26",
"issue": "12",
"pages": "2341--2354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiang Li, Derek F Wong, Lidia S Chao, Muhua Zhu, Tong Xiao, Jingbo Zhu, and Min Zhang. 2018. Lin- guistic knowledge-aware neural machine translation. IEEE/ACM Transactions on Audio, Speech and Lan- guage Processing (TASLP), 26(12):2341-2354.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stanford neural machine translation systems for spoken language domains",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "76--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spo- ken language domains. In Proceedings of the In- ternational Workshop on Spoken Language Transla- tion, pages 76-79.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Document context neural machine translation with memory networks",
"authors": [
{
"first": "Sameen",
"middle": [],
"last": "Maruf",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.03688"
]
},
"num": null,
"urls": [],
"raw_text": "Sameen Maruf and Gholamreza Haffari. 2017. Docu- ment context neural machine translation with mem- ory networks. arXiv preprint arXiv:1711.03688.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Document-level neural machine translation with hierarchical attention networks",
"authors": [
{
"first": "Lesly",
"middle": [],
"last": "Miculicich",
"suffix": ""
},
{
"first": "Dhananjay",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.01576"
]
},
"num": null,
"urls": [],
"raw_text": "Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention net- works. arXiv preprint arXiv:1809.01576.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Analysing concatenation approaches to document-level nmt in two different domains",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Sharid",
"middle": [],
"last": "Lo\u00e1iciga",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Workshop on Discourse in Machine Translation",
"volume": "",
"issue": "",
"pages": "51--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Scherrer, J\u00f6rg Tiedemann, and Sharid Lo\u00e1iciga. 2019. Analysing concatenation approaches to document-level nmt in two different domains. In Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019), pages 51- 61.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06709"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Minimum risk training for neural machine translation",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.02433"
]
},
"num": null,
"urls": [],
"raw_text": "Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Combining local and document-level context: The lmu munich neural machine translation system at wmt19",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Stojanovski",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "400--406",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Stojanovski and Alexander Fraser. 2019. Com- bining local and document-level context: The lmu munich neural machine translation system at wmt19. In Proceedings of the Fourth Conference on Ma- chine Translation (Volume 2: Shared Task Papers, Day 1), pages 400-406.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to remember translation history with a continuous cache",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "407--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. Transactions of the Association for Computational Linguistics, 6:407-420.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Context-aware neural machine translation learns anaphora resolution",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Serdyukov",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.10163"
]
},
"num": null,
"urls": [],
"raw_text": "Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine trans- lation learns anaphora resolution. arXiv preprint arXiv:1805.10163.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Exploiting cross-sentence context for neural machine translation",
"authors": [
{
"first": "Longyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.04347"
]
},
"num": null,
"urls": [],
"raw_text": "Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. arXiv preprint arXiv:1704.04347.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling coherence for discourse neural machine translation",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7338--7345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2019. Modeling coherence for discourse neural ma- chine translation. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 7338-7345.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computa- tional linguistics: human language technologies, pages 1480-1489.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving the transformer translation model with document-level context",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jingfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.03581"
]
},
"num": null,
"urls": [],
"raw_text": "Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. arXiv preprint arXiv:1810.03581.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"content": "<table><tr><td>: Dataset statistics in terms of number of sen-</td></tr><tr><td>tence pairs and documents and the corresponding train,</td></tr><tr><td>test and development split.</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "Table summarizing corpora size and BLEU scores for Transformer based NMT systems.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"text": "Table summarizing HAN & NMT Transformer results (Rotowire official test) for adding document context and domain adaptation.",
"content": "<table><tr><td>Models</td><td>Tokens</td><td>BLEU Score</td></tr><tr><td/><td>EN DE</td><td>Reg Rand None</td></tr><tr><td>HAN reg</td><td colspan=\"2\">420M 489M 39.31 39.25 39.08</td></tr><tr><td colspan=\"3\">+HAN rand 188M 224M 37.76 37.76 37.75</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}