{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:35.070198Z"
},
"title": "Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation",
"authors": [
{
"first": "Mitchell",
"middle": [
"A"
],
"last": "Gordon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "kevinduh@cs.jhu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We explore best practices for training small, memory efficient machine translation models with sequence-level knowledge distillation in the domain adaptation setting. While both domain adaptation and knowledge distillation are widely-used, their interaction remains little understood. Our large-scale empirical results in machine translation (on three language pairs with three domains each) suggest distilling twice for best performance: once using general-domain data and again using indomain data with an adapted teacher. The code for these experiments can be found here. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We explore best practices for training small, memory efficient machine translation models with sequence-level knowledge distillation in the domain adaptation setting. While both domain adaptation and knowledge distillation are widely-used, their interaction remains little understood. Our large-scale empirical results in machine translation (on three language pairs with three domains each) suggest distilling twice for best performance: once using general-domain data and again using indomain data with an adapted teacher. The code for these experiments can be found here. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine translation systems rely on large amounts of data to deduce the rules underlying translation from one language to another. This presents challenges in some important niche domains, such as patent and medical literature translation, due to the high cost of hiring experts to generate suitable training data. A cost-effective alternative is domain adaptation, which leverages large amounts of parallel documents from less difficult and more readily-available domains, such as movie subtitles and news articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain adaptation works well in practice. However, these large datasets, which we call general domain datasets, introduce some scalability problems. Large datasets require large models; neural machine translation systems can take days or weeks to train. Some models require gigabytes of disk space, making deployment to edge computing devices challenging. They can also require excessive compute during inference, making them slow and costly to scale up in production environments (Gordon, 2019) .",
"cite_spans": [
{
"start": 481,
"end": 495,
"text": "(Gordon, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To alleviate these issues, knowledge distillation (aka Teacher-Student) (Hinton et al., 2015) is used to compress models into a manageable form. But although knowledge distillation is the most commonly used form of model compression in practice, it is also one of the least understood.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Hinton et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we perform a large-scale empirical analysis to attempt to discover best practices when using knowledge distillation in combination with domain adaptation. Out of several common-sense configurations, we find that two stages of knowledge distillation give the best performance: one using general-domain data and another using in-domain data with an adapted teacher. We perform experiments on multiple language pairs (Russian-English, German-English, Chinese-English), domains (patents, subtitles, news, TED talks), and student sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain Adaptation helps overcome a lack of quality training data in niche domains by leveraging large amounts of data in a more accessible general-domain. Domain adaptation is usually accomplished by continued training (Luong and Manning, 2015; Zoph et al., 2016) , which involves two steps: 1. A model is randomly initialized and trained until convergence on the general-domain data.",
"cite_spans": [
{
"start": 219,
"end": 244,
"text": "(Luong and Manning, 2015;",
"ref_id": "BIBREF15"
},
{
"start": 245,
"end": 263,
"text": "Zoph et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Step 1 and trained until convergence on the in-domain dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A new model is initialized with the parameters resulting from",
"sec_num": "2."
},
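{
"text": "A minimal code sketch of the two-step continued-training recipe above. This is an editorial illustration rather than the toolkit used in this paper: Model, train_fn, and the PyTorch-style state_dict copying are assumed placeholders for an NMT model class and a train-until-convergence loop.\n\ndef continued_training(Model, train_fn, general_data, in_domain_data):\n    # Step 1: random initialization, then train to convergence on general-domain data.\n    general_model = Model()\n    train_fn(general_model, general_data)\n    # Step 2: initialize a new model from Step 1's parameters (a favorable weight\n    # initialization) and train to convergence on the in-domain data.\n    in_domain_model = Model()\n    in_domain_model.load_state_dict(general_model.state_dict())\n    train_fn(in_domain_model, in_domain_data)\n    return in_domain_model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},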
{
"text": "We can consider domain adaptation as extracting a useful inductive-bias from the general-domain dataset, which is encoded and passed along to the in-domain model as a favorable weight initialization. While there are other methods of extracting inductive bias from general-domain datasets (including mixed fine-tuning (Chu et al., 2017) and Configuration 1 is the model which is trained on in-domain data with random initializations and without the assistance of a teacher. cost weighting (Chen et al., 2017) ), continued training is most common and the focus of this paper.",
"cite_spans": [
{
"start": 317,
"end": 335,
"text": "(Chu et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 488,
"end": 507,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A new model is initialized with the parameters resulting from",
"sec_num": "2."
},
{
"text": "Knowledge Distillation is a method for improving the performance of under-parameterized \"Student\" models by exploiting the probability distribution of a more computationally complex \"Teacher\" network. Kim and Rush (2016) presented an extension of knowledge distillation to machine translation in two flavors: word-level and sequence-level knowledge distillation. Sequence-level knowledge distillation, which is more general, involves three steps: 1. A large Teacher network is randomly initialized and trained until convergence on the data.",
"cite_spans": [
{
"start": 201,
"end": 220,
"text": "Kim and Rush (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A new model is initialized with the parameters resulting from",
"sec_num": "2."
},
{
"text": "2. The source-side of the training data is decoded using the Teacher to produce \"distilled\" target data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A new model is initialized with the parameters resulting from",
"sec_num": "2."
},
{
"text": "3. A smaller Student model is randomly initialized and trained until convergence on the distilled source-target pairs (discarding the original target sequences in the data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A new model is initialized with the parameters resulting from",
"sec_num": "2."
},
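{
"text": "A minimal sketch of the three steps above, using the same kind of placeholder helpers as the earlier sketch; TeacherModel, StudentModel, train_fn, and a beam-search decode_fn are assumptions for illustration, not the exact toolkit used here.\n\ndef sequence_level_kd(TeacherModel, StudentModel, train_fn, decode_fn, data):\n    # data is a list of (source, target) sentence pairs.\n    # Step 1: train a large teacher on the original parallel data.\n    teacher = TeacherModel()\n    train_fn(teacher, data)\n    # Step 2: decode the source side with the teacher to produce distilled targets.\n    distilled = [(src, decode_fn(teacher, src)) for src, _ in data]\n    # Step 3: train a small student on the distilled pairs, discarding the\n    # original target sequences.\n    student = StudentModel()\n    train_fn(student, distilled)\n    return student",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},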
{
"text": "The goal of knowledge distillation is to train the student model to mimic the teacher's probability distribution over translations. Since the teacher and the student are trained on the same dataset, they should be capable of learning the same distribution in theory. In practice, however, pre-processing the training data with the teacher improves student test performance. 2 Explanations for this phenomenon 2 Interestingly, this can be true even when the student has include dark knowledge (Furlanello et al., 2018) , mode reduction (Zhou et al., 2019) , and regularization (Gordon and Duh, 2019; Dong et al., 2019) , but no definitive evidence has been given.",
"cite_spans": [
{
"start": 492,
"end": 517,
"text": "(Furlanello et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 535,
"end": 554,
"text": "(Zhou et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 576,
"end": 598,
"text": "(Gordon and Duh, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 599,
"end": 617,
"text": "Dong et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A new model is initialized with the parameters resulting from",
"sec_num": "2."
},
{
"text": "Sequence-level knowledge distillation is widely used in both industry (Xia et al., 2019) and research and is the second focus of this paper. 3",
"cite_spans": [
{
"start": 70,
"end": 88,
"text": "(Xia et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A new model is initialized with the parameters resulting from",
"sec_num": "2."
},
{
"text": "How domain adaptation and knowledge distillation would interact when applied in combination was not previously clear. Specifically, our research questions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distilling and Adapting",
"sec_num": "3"
},
{
"text": "\u2022 Is a distilled model easier or harder to adapt to new domains?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distilling and Adapting",
"sec_num": "3"
},
{
"text": "\u2022 Should knowledge distillation be used on indomain data? If so, how should the teacher be trained?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distilling and Adapting",
"sec_num": "3"
},
{
"text": "To answer these questions, we performed experiments on 9 possible configurations which are assigned configuration numbers in Figure 1 . For ease of reference, we will primarily refer to small, in-domain models by their configuration number and encourage readers to consult Figure 1 . Each configuration has two attributes of interest.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 273,
"end": 281,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Distilling and Adapting",
"sec_num": "3"
},
{
"text": "the same computational resources as the teacher (Furlanello et al., 2018) Distilling In-Domain Data How is in-domain data pre-processed using knowledge distillation? Some models are trained with no pre-processing (configurations 1, 4, and 7), while others use a teacher to pre-process the in-domain training data. This teacher might be a baseline trained on indomain data only (configurations 2, 5, and 8) or it can be trained on general-domain data and then adapted to in-domain via continued training (configurations 3, 6, and 9).",
"cite_spans": [
{
"start": 48,
"end": 73,
"text": "(Furlanello et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distilling and Adapting",
"sec_num": "3"
},
{
"text": "Initialization How are models initialized? A model might be randomly initialized (configurations 1, 2, and 3), or it might be adapted from a model trained on general-domain data. This general-domain model might be a baseline trained directly on the general-domain data (configurations 4, 5, and 6) or it might be a student model trained on the output of a general-domain teacher (configurations 7, 8, 9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distilling and Adapting",
"sec_num": "3"
},
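{
"text": "The two attributes above fully determine the nine configurations in Figure 1. The small enumeration below is an editorial reading aid; only the configuration numbers and labels come from the text, while the Python representation itself is illustrative.\n\n# Rows: how the small in-domain model is initialized (configs 1-3, 4-6, 7-9).\nINITS = ['random', 'general-domain baseline', 'general-domain student']\n# Columns: which teacher, if any, distills the in-domain training data.\nTEACHERS = ['none', 'in-domain baseline', 'adapted general-domain teacher']\n\nCONFIGS = {3 * i + j + 1: (init, teacher)\n           for i, init in enumerate(INITS)\n           for j, teacher in enumerate(TEACHERS)}\n# For example, CONFIGS[9] == ('general-domain student', 'adapted general-domain teacher').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distilling and Adapting",
"sec_num": "3"
},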
{
"text": "General-Domain Data We train models in multiple settings: 3 language pairs (German-English, Russian-English, and Chinese-English) each with 1 general-domain dataset and 2 different in-domain datasets. The general-domain datasets for each language are a concatenation of data from Open-Subtitles2018 (Tiedemann, 2016; Lison and Tiedemann, 2016 ) (which contains translated movie subtitles) and the WMT 2017 datasets (Ondrej et al., 2017) (which includes a variety of sources, including news commentary, parliamentary proceedings, and web-crawled data).",
"cite_spans": [
{
"start": 299,
"end": 316,
"text": "(Tiedemann, 2016;",
"ref_id": "BIBREF25"
},
{
"start": 317,
"end": 342,
"text": "Lison and Tiedemann, 2016",
"ref_id": "BIBREF13"
},
{
"start": 415,
"end": 436,
"text": "(Ondrej et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "In-Domain Data We use the World International Property Organization (WIPO) COPPA-V2 dataset (Junczys-Dowmunt et al., 2018) and the TED Talks dataset (Duh, 2019a) as our two in-domain datasets. The WIPO data contains parallel sentences from international patent abstracts, while the TED Talks dataset consists of translated transcripts of public speeches.",
"cite_spans": [
{
"start": 149,
"end": 161,
"text": "(Duh, 2019a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "The size of each training dataset is presented in Table 1 . General-domain datasets contain tens of millions of sentences, while indomain datasets contain much less. German-English WIPO has an exceptional amount of training data (4.5 times more than the next biggest indomain dataset) and helps qualify how our results might change when more in-domain data is available.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": null
},
{
"text": "Pre-processing All datasets are tokenized using the Moses 4 tokenizer. A BPE vocabulary (Sennrich et al., 2016) of 30,000 tokens is constructed for each language using the training set of the general-domain data. This BPE vocabulary is then applied to both in-domain and general-domain datasets. This mimics the typical scenario of a single, general-domain model being trained and then adapted to new domains as they are encountered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": null
},
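{
"text": "As a concrete illustration of this shared-vocabulary setup, the sketch below learns 30,000 BPE merge operations on the general-domain training text and applies the same codes to every corpus. The subword-nmt package and the file names are assumptions made for illustration, not necessarily the exact tooling or paths used in this work.\n\nfrom subword_nmt.learn_bpe import learn_bpe\nfrom subword_nmt.apply_bpe import BPE\n\n# Learn the BPE merges from the general-domain training set only.\nwith open('general.tok.en') as train_text, open('bpe.codes.en', 'w') as codes_out:\n    learn_bpe(train_text, codes_out, num_symbols=30000)\n\n# Apply the same codes to general-domain and in-domain corpora alike, so that\n# adapted models keep a single fixed vocabulary.\nwith open('bpe.codes.en') as codes_in:\n    bpe = BPE(codes_in)\nfor corpus in ['general.tok.en', 'wipo.tok.en', 'ted.tok.en']:\n    with open(corpus) as fin, open(corpus + '.bpe', 'w') as fout:\n        for line in fin:\n            fout.write(bpe.process_line(line))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": null
},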
{
"text": "Note that re-training BPE on in-domain data to produce a different vocabulary would force us to re-build the model, making adaptation impossible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": null
},
{
"text": "Evaluation The general-domain development set for each language contains newstest2016 concatenated with the last 2500 lines of OpenSubti-tles2018. We reserve 3000 lines of WIPO to use as the in-domain development set. TED talks development sets are provided by the authors and contain around 2000 lines each. Evaluations of each model are performed by decoding the appropriate development set with a beam-search size of 10 and comparing to the reference using multi-bleu.perl from the Moses toolkit. The tokenization used during multi-bleu.perl evaluation is the same as the one provided in (Duh, 2019a) .",
"cite_spans": [
{
"start": 591,
"end": 603,
"text": "(Duh, 2019a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": null
},
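{
"text": "The evaluation loop described above can be sketched as follows; the translate helper, file names, and the local path to multi-bleu.perl are placeholders, and only the beam size of 10 and the use of multi-bleu.perl come from the text.\n\nimport subprocess\n\ndef evaluate(model, translate, dev_src, dev_ref, script='multi-bleu.perl'):\n    # Decode the development set with a beam-search size of 10.\n    with open(dev_src) as fin, open('dev.hyp', 'w') as fout:\n        for line in fin:\n            print(translate(model, line, beam_size=10), file=fout)\n    # multi-bleu.perl takes the reference as an argument and the hypothesis on stdin.\n    with open('dev.hyp') as hyp:\n        result = subprocess.run(['perl', script, dev_ref],\n                                stdin=hyp, capture_output=True, text=True)\n    return result.stdout  # e.g. a line starting with 'BLEU = ...'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": null
},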
{
"text": "A list of architecture sizes is provided in Table 2 . Teachers are trained using the Large hyperparameter settings, while we experiment with Medium, Small, and Tiny students for each configuration and language/domain setting. All models are Transformers (Vaswani et al., 2017) . We use the same hyper-parameters (which are based on a template from (Duh, 2019b) 5 ) for every model, except those that affect the size of the model (Table 2) model does not improve for 10 checkpoints (earlystopping), whichever comes first.",
"cite_spans": [
{
"start": 255,
"end": 277,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 44,
"end": 52,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 430,
"end": 439,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Architectures and Training",
"sec_num": "4.2"
},
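{
"text": "For reference, the four model sizes in Table 2 can be written down as a configuration dictionary. The values are taken from Table 2 (layers counts encoder plus decoder layers, so Large has 6 encoder and 6 decoder layers); the key names are illustrative rather than a specific toolkit's option names.\n\nMODEL_SIZES = {\n    'Large':  {'layers': 12, 'ff_size': 2048, 'hidden_size': 512},\n    'Medium': {'layers': 6,  'ff_size': 2048, 'hidden_size': 512},\n    'Small':  {'layers': 6,  'ff_size': 1024, 'hidden_size': 256},\n    'Tiny':   {'layers': 2,  'ff_size': 1024, 'hidden_size': 256},\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures and Training",
"sec_num": "4.2"
},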
{
"text": "Continued Training Work by (Gordon and Duh, 2019) suggests that students may benefit from training on some combination of the distilled and undistilled reference dataset. We experimented with this by continuing to train each in-domain student model on the original, un-distilled dataset, using similar stopping criterion to the first round of training. This improved some models by up to 1 BLEU. Because of this, we recommend that any distilled model continue training on the original dataset as long as development accuracy improves. When continued training improves performance of a student, we show that score instead of the score without continued training.",
"cite_spans": [
{
"start": 27,
"end": 49,
"text": "(Gordon and Duh, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures and Training",
"sec_num": "4.2"
},
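{
"text": "Concretely, the continued-training check described above can be sketched as below, reusing the placeholder train_fn from the earlier sketches together with an assumed dev_bleu scoring helper; the actual experiments early-stop on the development set rather than comparing a single pair of checkpoints.\n\nimport copy\n\ndef maybe_continue_on_references(student, train_fn, dev_bleu, in_domain_data):\n    # After training on distilled data, resume training on the original,\n    # un-distilled references and keep the result only if dev BLEU improves.\n    before = dev_bleu(student)\n    candidate = copy.deepcopy(student)\n    train_fn(candidate, in_domain_data)\n    return candidate if dev_bleu(candidate) > before else student",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures and Training",
"sec_num": "4.2"
},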
{
"text": "In this section, we compare training in-domain models with no teacher (config 1), a teacher trained on in-domain data only (config 2), and a teacher adapted from the general domain (config 3). The performance of the two teachers in each languagepair and domain is listed in Table 3 . It shows that adaptation greatly improves the performance of every in-domain teacher except German-English WIPO. 6 Table 4 shows the results of using these teachers to distill the in-domain data before training student models in various settings. We see that in almost every case, using an adapted teacher gives the best or close to the best results. This is somewhat expected since models with better development scores tend to make better teachers ( Table 3 : BLEU development score of in-domain teachers when either randomly initialized or initialized from the weights of a large model trained on general-domain data. Adaptation drastically improves performance on every language pair and domain, except de-en WIPO. et al., 2018). Although knowledge distillation is typically seen as \"simplifying\" data for students, in this case we suspect that the adapted teacher's knowledge about the general-domain is making its way to students via the distilled in-domain data.",
"cite_spans": [
{
"start": 734,
"end": 735,
"text": "(",
"ref_id": null
}
],
"ref_spans": [
{
"start": 274,
"end": 281,
"text": "Table 3",
"ref_id": null
},
{
"start": 399,
"end": 406,
"text": "Table 4",
"ref_id": null
},
{
"start": 736,
"end": 743,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adapt Teachers",
"sec_num": "5.1"
},
{
"text": "We also train small models directly on the generaldomain data and adapt them to in-domain data. The possible configurations are random initialization (config 1), initializing from a baseline model trained on general-domain data (config 4), or initializing from a student model distilled from a generaldomain teacher (config 7). Table 5 shows the performance of the models trained on the generaldomain datasets, and Table 6 shows their performance after being fine-tuned on in-domain data.",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 415,
"end": 422,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adapt the Best Student",
"sec_num": "5.2"
},
{
"text": "Training small models directly on the generaldomain data and then fine-tuning on in-domain data gives much more substantial gains (5-10 BLEU) than providing indirect access to the generaldomain data through an adapted teacher (config 3). We believe this is because a large amount of data is required to fully reveal the teacher's probability distribution over translations (Fang et al., 2019) . While an adapted teacher might contain much information from the general-domain, it is unable to express that knowledge to students just by translating the smaller in-domain dataset. To get the full benefit of general-domain data, the small models must be directly pre-trained on general-domain data. 7 Indirect access to the general-domain data through a general-domain teacher is insufficient.",
"cite_spans": [
{
"start": 373,
"end": 392,
"text": "(Fang et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapt the Best Student",
"sec_num": "5.2"
},
{
"text": "We also observe that Medium-sized models are not small enough to benefit from knowledge distillation in the general-domain, and so their generaldomain scores do not improve with distillation. Table 4 : BLEU development scores for in-domain students with no teacher (config 1), an in-domain only teacher (config 2), or an adapted teacher continued from the general-domain (config 3). In almost every case, using an adapted teacher gives the best or close to the best results.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 199,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adapt the Best Student",
"sec_num": "5.2"
},
{
"text": "These distilled Medium-sized models (config 7) also tend to do slightly worse than their baseline counter-parts (config 4) on in-domain data. Indeed, Figure 2 shows that in-domain performance is roughly linearly related to general-domain performance regardless of whether distillation is applied before adaptation. This implies that distillation does not interfere with the adaptability of a model, so the model with the best general-domain performance should be adapted, regardless of whether distillation was applied. Adapting a distilled model can improve performance slightly over adapting the baseline model without distillation.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adapt the Best Student",
"sec_num": "5.2"
},
{
"text": "Finally, we test whether these two ways of improving small, in-domain models are orthogonal. We might hypothesize that training small models directly on general-domain data eliminates the need to adapt teachers or use an in-domain teacher at all. To test this, we also train adapted student models using a baseline teacher (config 8) and an adapted teacher (config 9). Table 6 : In-domain models that are initialized randomly (config 1), initialized from a baseline trained on general-domain data directly (config 4), or initialized from a general-domain student trained using a generaldomain teacher (config 7). Figure 2 : The BLEU of general-domain models vs. their corresponding in-domain scores when adapted to a different domain. We see that in-domain performance is roughly linearly related to general-domain performance regardless of whether distillation is applied before adaptation. Table 7 : In-domain models which are initialized from a general-domain student and trained on in-domain data which is pre-processed either with no teacher (config 7), an in-domain only teacher (config 8), or an adapted teacher continued from general-domain data (config 9).",
"cite_spans": [],
"ref_spans": [
{
"start": 369,
"end": 376,
"text": "Table 6",
"ref_id": null
},
{
"start": 613,
"end": 621,
"text": "Figure 2",
"ref_id": null
},
{
"start": 892,
"end": 899,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "ing in-domain data with an adapted teacher can further boost performance of an already distilled model, while using a teacher trained only on in-domain data can sometimes hurt performance. These results lead us to a general recipe for training small, in-domain models using knowledge distillation and domain adaptation in combination:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "1. Distill general-domain data to improve general-domain student performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "2. Adapt the best model from Step 1 to indomain data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "(2-10 BLEU better than no adaptation)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "3. Adapt the teacher and distill again in-domain. (0-2 BLEU better than no or non-adapted teacher)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
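{
"text": "Putting the three steps together, configuration 9 can be sketched end to end as follows, reusing the placeholder helpers from the earlier sketches; Teacher, Student, train_fn, decode_fn, and the PyTorch-style parameter copying are assumptions, not the actual training scripts used for these experiments.\n\ndef distill_adapt_distill(Teacher, Student, train_fn, decode_fn,\n                          general_data, in_domain_data):\n    # Step 1: distill in the general domain.\n    gd_teacher = Teacher()\n    train_fn(gd_teacher, general_data)\n    gd_distilled = [(src, decode_fn(gd_teacher, src)) for src, _ in general_data]\n    gd_student = Student()\n    train_fn(gd_student, gd_distilled)\n\n    # Step 2: adapt the best general-domain model; here the in-domain student is\n    # initialized from the general-domain student's weights.\n    in_student = Student()\n    in_student.load_state_dict(gd_student.state_dict())\n\n    # Step 3: adapt the teacher, distill the in-domain data with it, and finish\n    # training the student on that distilled data (configuration 9).\n    in_teacher = Teacher()\n    in_teacher.load_state_dict(gd_teacher.state_dict())\n    train_fn(in_teacher, in_domain_data)\n    in_distilled = [(src, decode_fn(in_teacher, src)) for src, _ in in_domain_data]\n    train_fn(in_student, in_distilled)\n    return in_student",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},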
{
"text": "Following this procedure will result in either configuration 6 or 9 as described in Figure 1 . And indeed, configuration 9 performs the best or near best (within 0.1 BLEU) in almost every case, as shown in Table 9 . For those Medium sized models which were not improved by distillation in the general-domain, configuration 6 performs the best.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 92,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 206,
"end": 213,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "Models trained on German-English WIPO are an exception, with adaptation from the generaldomain not improving performance. This is in line Table 8 : Development scores for models initialized from a model trained on general-domain data. The indomain data is pre-processed with a teacher adapted from the general-domain (config 6).",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 145,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "Domain Size de-en ru-en zh-en med 4/6 6 4/6/7/9 ted small 6/9 9 9 tiny 9 9 9 med 2 4/6/9 6 wipo small 3 4/6 9 tiny 8 7/9 9 Table 9 : Best configurations for each setting. Scores within 0.1 BLEU of the best are also listed. Configuration 9 generally performs best, while configuration 6 is best for those medium-sized models which were not improved by distillation in the general-domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "with the results from Table 3 which shows adaptation does not improve teachers, either. We suspect this is because the German-English WIPO dataset is the biggest out of any in-domain dataset, making adaptation unnecessary. Future work might also benefit from a quantification of domain similarity between datasets (Britz et al., 2017) , which would guide the use of domain adaptation in cases like these.",
"cite_spans": [
{
"start": 314,
"end": 334,
"text": "(Britz et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distill, Adapt, Distill",
"sec_num": "5.3"
},
{
"text": "The models trained in this work collectively required 10 months of single-GPU compute time. Table 10 breaks this down by model size and dataset. While distilling twice might give the best performance, it also increases the amount of computation time required. Rather than training a single indomain model, configuration 9 requires training a general-domain teacher, a general-domain student, and then adapting both. This can increase compute required to train models by 2-4x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Times",
"sec_num": "5.4"
},
{
"text": "A huge portion of computation was also spent on decoding the general-domain data using a teacher model for sequence-level knowledge distillation, which could take up to 24 days of GPU time (using a beam size of 10 and a batch size of 10). This Model Gen-Domain In-Domain Adapting Large 2-4 days 2-4 days 7-48 hrs Med 2-4 days 2-4 days 1-48 hrs Small 1-2 days 1-2 days 2-14 hrs Tiny 1 days 1-24 hrs 2-24 hrs Distill 10-24 days 1-2 days Table 10 : Estimates of the computation time required for training randomly initialized models on just generaldomain data or just in-domain data. We also show the time required for adapting general-domain models and distilling data using teachers.",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 464,
"text": "Adapting Large 2-4 days 2-4 days 7-48 hrs Med 2-4 days 2-4 days 1-48 hrs Small 1-2 days 1-2 days 2-14 hrs Tiny 1 days 1-24 hrs 2-24 hrs Distill 10-24 days 1-2 days Table 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Training Times",
"sec_num": "5.4"
},
{
"text": "can be arbitrarily sped up using multiple GPUs in parallel, but future work might explore how to distill teachers in a less expensive way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Times",
"sec_num": "5.4"
},
{
"text": "Our work is one the few that focuses specifically on training small, under-parameterized in-domain models. There is, however, similar work which is not directly comparable but uses knowledge distillation to adapt to new domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Knowledge Adaptation uses knowledge distillation to transfer knowledge from multiple, labeled source domains to un-labeled target domains. This is in contrast to our setting, which has labels for both general-domain and in-domain data. Ruder et al. (2017) introduced this idea as \"Knowledge Adaptation,\" using multi-layer perceptrons to provide sentiment analysis labels for unlabeled indomain data. Similar work includes Iterative Dual Domain Adaptation (Zeng et al., 2019) and Domain Transformation Networks . These ideas are not limited to machine translation; recent work by Meng et al. (2020) trains in-domain speech recognition systems with knowledge distillation, while Orbes-Arteaga et al. (2019) does similar work on segmentation of magnetic resonance imaging scans.",
"cite_spans": [
{
"start": 236,
"end": 255,
"text": "Ruder et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 455,
"end": 474,
"text": "(Zeng et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 579,
"end": 597,
"text": "Meng et al. (2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Compressing Pre-trained Language Models Domain adaptation via continued training in NMT is closely related to the idea of pre-training a language model and fine-tuning to different tasks, which might come from different data distributions than the pre-training data. Because language models tend to be extremely cumbersome to train and evaluate, more focus is given to the compression aspect of knowledge distillation. Sanh et al. (2019) , Sun et al. (2019) , and independently showed that knowledge distillation could be used to compress pre-trained models without affecting downstream tasks. Tang et al. (2019) showed that task-specific information could be distilled from a large Transformer into a much smaller Bi-directional RNN. These methods might reasonably be extended to domain adaptation for NMT.",
"cite_spans": [
{
"start": 419,
"end": 437,
"text": "Sanh et al. (2019)",
"ref_id": "BIBREF20"
},
{
"start": 440,
"end": 457,
"text": "Sun et al. (2019)",
"ref_id": "BIBREF23"
},
{
"start": 594,
"end": 612,
"text": "Tang et al. (2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this work, we conducted a large-scale empirical investigation to determine best practices when using sequence-level knowledge distillation and domain adaptation in combination. We found that adapting models from the general-domain makes them better teachers and that distilling using general-domain data does not impact a model's adaptability. This leads us to recommend distilling twice for best results: once in the general-domain to possibly improve student performance, and again using an adapted in-domain teacher. The results are robust among multiple language pairs, student sizes, in-domain settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://git.io/Jf2t8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sequence-level knowledge distillation is also commonly used to train non-autoregressive machine translation models(Zhou et al., 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "statmt.org/moses 5 https://git.io/JvL85",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "German WIPO is also the largest in-domain dataset we test, which might make adaptation unnecessary. Another explanation might be that the German-English general-domain is not similar enough to the patent domain in this case to improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A reasonable alternative to this might include data-freeKD (Yin et al., 2019), which explores the teacher's probability distribution without any dependence on data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Effective domain mixing for neural machine translation",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Britz",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Reid",
"middle": [],
"last": "Pryzant",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "118--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Britz, Quoc Le, and Reid Pryzant. 2017. Ef- fective domain mixing for neural machine transla- tion. In Proceedings of the Second Conference on Machine Translation, pages 118-126.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Cost weighting for neural machine translation domain adaptation",
"authors": [
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Larkin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "40--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boxing Chen, Colin Cherry, George Foster, and Samuel Larkin. 2017. Cost weighting for neural ma- chine translation domain adaptation. In Proceedings of the First Workshop on Neural Machine Transla- tion, pages 40-46, Stroudsburg, PA, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An empirical comparison of domain adaptation methods for neural machine translation",
"authors": [
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "385--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of domain adaptation methods for neural machine translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 385-391, Stroudsburg, PA, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distillation \u2248 early stopping? harvesting dark knowledge utilizing anisotropic information retrieval for overparameterized neural network",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jikai",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zhihua",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Dong, Jikai Hou, Yiping Lu, and Zhihua Zhang. 2019. Distillation \u2248 early stopping? harvesting dark knowledge utilizing anisotropic information re- trieval for overparameterized neural network.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The multitarget TED talks task (MTTT)",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Duh. 2019a. The multitarget TED talks task (MTTT).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Data-Free adversarial distillation",
"authors": [
{
"first": "Gongfan",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Chengchao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xinchao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mingli",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gongfan Fang, Jie Song, Chengchao Shen, Xinchao Wang, Da Chen, and Mingli Song. 2019. Data-Free adversarial distillation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Born again neural networks",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Furlanello",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zachary",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lipton",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Tschannen",
"suffix": ""
},
{
"first": "Anima",
"middle": [],
"last": "Itti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Anandkumar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommaso Furlanello, Zachary C Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "All the ways you can compress bert",
"authors": [
{
"first": "Mitchell",
"middle": [
"A"
],
"last": "Gordon",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell A. Gordon. 2019. All the ways you can com- press bert.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Explaining Sequence-Level knowledge distillation as Data-Augmentation for neural machine translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gordon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell A Gordon and Kevin Duh. 2019. Explain- ing Sequence-Level knowledge distillation as Data- Augmentation for neural machine translation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "COPPA v2. 0: Corpus of parallel patent applications building large parallel corpora with GNU make",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Pouliquen",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Mazenc",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Bruno Pouliquen, and Christophe Mazenc. 2018. COPPA v2. 0: Corpus of parallel patent applications building large parallel corpora with GNU make.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sequence-Level knowledge distillation",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim and Alexander M Rush. 2016. Sequence- Level knowledge distillation.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Opensub-titles2016: Extracting large parallel corpora from movie and tv subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. Opensub- titles2016: Extracting large parallel corpora from movie and tv subtitles.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attentive student meets Multi-Task teacher: Improved knowledge distillation for pretrained models",
"authors": [
{
"first": "Linqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linqing Liu, Huan Wang, Jimmy Lin, Richard Socher, and Caiming Xiong. 2019. Attentive student meets Multi-Task teacher: Improved knowledge distilla- tion for pretrained models.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stanford neural machine translation systems for spoken language domains",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "76--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spo- ken language domains. In Proceedings of the In- ternational Workshop on Spoken Language Transla- tion, pages 76-79.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Domain adaptation via Teacher-Student learning for End-to-End speech recognition",
"authors": [
{
"first": "Zhong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Jinyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yashesh",
"middle": [],
"last": "Gaur",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhong Meng, Jinyu Li, Yashesh Gaur, and Yifan Gong. 2020. Domain adaptation via Teacher-Student learn- ing for End-to-End speech recognition.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Findings of the 2017 conference on machine translation (wmt17)",
"authors": [
{
"first": "Bojar",
"middle": [],
"last": "Ondrej",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Federmann",
"middle": [],
"last": "Christian",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Yvette",
"suffix": ""
},
{
"first": "Haddow",
"middle": [],
"last": "Barry",
"suffix": ""
},
{
"first": "Huck",
"middle": [],
"last": "Matthias",
"suffix": ""
},
{
"first": "Koehn",
"middle": [],
"last": "Philipp",
"suffix": ""
},
{
"first": "Logacheva",
"middle": [],
"last": "Liu Qun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Varvara",
"suffix": ""
}
],
"year": 2017,
"venue": "Second Conference onMachine Translation",
"volume": "",
"issue": "",
"pages": "169--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojar Ondrej, Rajen Chatterjee, Federmann Christian, Graham Yvette, Haddow Barry, Huck Matthias, Koehn Philipp, Liu Qun, Logacheva Varvara, Monz Christof, and Others. 2017. Findings of the 2017 conference on machine translation (wmt17). In Sec- ond Conference onMachine Translation, pages 169- 214.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Knowledge distillation for semi-supervised domain adaptation",
"authors": [
{
"first": "Mauricio",
"middle": [],
"last": "Orbes-Arteaga",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Cardoso",
"suffix": ""
},
{
"first": "Lauge",
"middle": [],
"last": "S\u00f8rensen",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Igel",
"suffix": ""
},
{
"first": "Sebastien",
"middle": [],
"last": "Ourselin",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Modat",
"suffix": ""
},
{
"first": "Mads",
"middle": [],
"last": "Nielsen",
"suffix": ""
},
{
"first": "Akshay",
"middle": [],
"last": "Pai",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauricio Orbes-Arteaga, Jorge Cardoso, Lauge S\u00f8rensen, Christian Igel, Sebastien Ourselin, Marc Modat, Mads Nielsen, and Akshay Pai. 2019. Knowledge distillation for semi-supervised domain adaptation.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Knowledge adaptation: Teaching to adapt",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Parsa",
"middle": [],
"last": "Ghaffari",
"suffix": ""
},
{
"first": "John G",
"middle": [],
"last": "Breslin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2017. Knowledge adaptation: Teaching to adapt.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1715-1725,",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "P",
"middle": [
"A"
],
"last": "Stroudsburg",
"suffix": ""
},
{
"first": "Usa",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Patient knowledge distillation for BERT model compression",
"authors": [
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4314--4323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for BERT model com- pression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4314-4323, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distilling Task-Specific knowledge from BERT into simple neural networks",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Linqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Vechtomova",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling Task- Specific knowledge from BERT into simple neural networks.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Finding alternative translations in a large corpus of movie subtitle",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "3518--3522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2016. Finding alternative translations in a large corpus of movie subtitle. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3518- 3522.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Go from the general to the particular: Multi-Domain translation with domain transformation networks",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Longyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "O K",
"middle": [],
"last": "Victor",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Wang, Longyue Wang, Shuming Shi, Victor O K Li, and Zhaopeng Tu. 2019. Go from the general to the particular: Multi-Domain translation with do- main transformation networks.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Microsoft research asia's systems for WMT19",
"authors": [
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Weicong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Linyuan",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Yichong",
"middle": [],
"last": "Leng",
"suffix": ""
},
{
"first": "Renqian",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Yiren",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Others",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "424--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yingce Xia, Xu Tan, Fei Tian, Fei Gao, Weicong Chen, Yang Fan, Linyuan Gong, Yichong Leng, Renqian Luo, Yiren Wang, and Others. 2019. Microsoft re- search asia's systems for WMT19. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 424- 433.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dreaming to distill: Data-free knowledge transfer via DeepInversion",
"authors": [
{
"first": "Pavlo",
"middle": [],
"last": "Hongxu Yin",
"suffix": ""
},
{
"first": "Zhizhong",
"middle": [],
"last": "Molchanov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "Arun",
"middle": [],
"last": "Alvarez",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Mallya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hoiem",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Niraj",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jha",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongxu Yin, Pavlo Molchanov, Zhizhong Li, Jose M Alvarez, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz. 2019. Dreaming to distill: Data-free knowledge transfer via DeepInversion.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Iterative dual domain adaptation for neural machine translation",
"authors": [
{
"first": "Jiali",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yubin",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Yaojie",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yongjing",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiali Zeng, Yang Liu, Jinsong Su, Yubin Ge, Yaojie Lu, Yongjing Yin, and Jiebo Luo. 2019. Iterative dual domain adaptation for neural machine translation.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Analyzing knowledge distillation in neural machine translation",
"authors": [
{
"first": "Dakun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Josep",
"middle": [],
"last": "Crego",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
}
],
"year": 2005,
"venue": "2018 International Workshop on Spoken Language Translation, IWSLT 2005",
"volume": "",
"issue": "",
"pages": "68--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dakun Zhang, Josep Crego, and Jean Senellart. 2018. Analyzing knowledge distillation in neural machine translation. In 2018 International Workshop on Spo- ken Language Translation, IWSLT 2005, Pittsburgh, PA, USA, October 24-25, 2005, pages 68-75.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Understanding knowledge distillation in nonautoregressive machine translation",
"authors": [
{
"first": "Chunting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunting Zhou, Jiatao Gu, and Graham Neubig. 2019. Understanding knowledge distillation in non- autoregressive machine translation.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Transfer learning for Low-Resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for Low-Resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "There are 9 possible configurations for training small, in-domain models with knowledge distillation and domain adaptation. Models trained on general-domain data are shown on the left, and in-domain models are shown on the right. Solid arrows represent domain adaptation via continued training. Dashed arrows represent improved optimization via sequence-level knowledge distillation."
},
"TABREF1": {
"type_str": "table",
"text": "The number of training sentences in each dataset.",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"text": ". Models are trained either for 300,000 updates, 100 epochs, or until the",
"content": "<table><tr><td>Size</td><td colspan=\"3\">Layers FF Size Hidden Size</td></tr><tr><td>Large</td><td>12</td><td>2048</td><td>512</td></tr><tr><td>Medium</td><td>6</td><td>2048</td><td>512</td></tr><tr><td>Small</td><td>6</td><td>1024</td><td>256</td></tr><tr><td>Tiny</td><td>2</td><td>1024</td><td>256</td></tr><tr><td/><td/><td/><td>113</td></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "Hyper-parameters of various model sizes used in this work. For example, the Large Transformer model architecture uses 6 encoder and 6 decoder layers, a feed-forward hidden dimension of 2048 at each layer, and a word-embedding / hidden dimension of 512.",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"text": "19.38 14.79 GD Lrg 37.64 26.57 20.45 wipo Lrg Rand 48.31 21.36 31.02 GD Lrg 48.56 37.08 36.80",
"content": "<table><tr><td colspan=\"2\">Domain Size Init</td><td>de-en ru-en zh-en</td></tr><tr><td>ted</td><td>Lrg Rand</td><td>29.25</td></tr><tr><td>Zhang</td><td/><td/></tr></table>",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>shows that distilling a second time us-</td></tr></table>",
"html": null,
"num": null
},
"TABREF7": {
"type_str": "table",
"text": "General-domain models, teachers and students.",
"content": "<table><tr><td colspan=\"4\">While knowledge distillation improves small and tiny</td></tr><tr><td colspan=\"4\">models, it appears medium-sized models are not under-</td></tr><tr><td colspan=\"4\">parameterized enough for knowledge distillation to im-</td></tr><tr><td colspan=\"2\">prove performance.</td><td/><td/></tr><tr><td colspan=\"2\">Domain Size</td><td colspan=\"2\">Cfg # de-en ru-en zh-en</td></tr><tr><td/><td/><td>1</td><td>27.73 19.34 15.17</td></tr><tr><td>ted</td><td>med</td><td>4</td><td>36.94 25.82 20.13</td></tr><tr><td/><td/><td>7</td><td>35.93 25.43 20.18</td></tr><tr><td/><td/><td>1</td><td>27.89 18.42 14.87</td></tr><tr><td/><td>small</td><td>4</td><td>34.78 24.10 18.84</td></tr><tr><td/><td/><td>7</td><td>35.33 24.30 19.32</td></tr><tr><td/><td/><td>1</td><td>25.78 17.48 13.03</td></tr><tr><td/><td>tiny</td><td>4</td><td>31.52 21.30 16.51</td></tr><tr><td/><td/><td>7</td><td>32.30 21.65 17.06</td></tr><tr><td/><td/><td>1</td><td>48.89 24.45 30.13</td></tr><tr><td>wipo</td><td>med</td><td>4</td><td>48.58 35.98 35.33</td></tr><tr><td/><td/><td>7</td><td>48.53 35.55 35.27</td></tr><tr><td/><td/><td>1</td><td>47.94 21.91 30.66</td></tr><tr><td/><td>small</td><td>4</td><td>48.13 35.30 34.90</td></tr><tr><td/><td/><td>7</td><td>48.31 35.18 34.52</td></tr><tr><td/><td/><td>1</td><td>44.15 21.39 27.67</td></tr><tr><td/><td>tiny</td><td>4</td><td>46.06 31.13 28.45</td></tr><tr><td/><td/><td>7</td><td>46.54 31.74 29.07</td></tr></table>",
"html": null,
"num": null
}
}
}
}