{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:46.498742Z"
},
"title": "Efficient and High-Quality Neural Machine Translation with OpenNMT",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SYSTRAN",
"location": {
"addrLine": "5 rue Feydeau",
"postCode": "75002",
"settlement": "Paris",
"country": "France"
}
},
"email": ""
},
{
"first": "Dakun",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SYSTRAN",
"location": {
"addrLine": "5 rue Feydeau",
"postCode": "75002",
"settlement": "Paris",
"country": "France"
}
},
"email": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Chouteau",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SYSTRAN",
"location": {
"addrLine": "5 rue Feydeau",
"postCode": "75002",
"settlement": "Paris",
"country": "France"
}
},
"email": ""
},
{
"first": "Josep",
"middle": [],
"last": "Crego",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SYSTRAN",
"location": {
"addrLine": "5 rue Feydeau",
"postCode": "75002",
"settlement": "Paris",
"country": "France"
}
},
"email": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SYSTRAN",
"location": {
"addrLine": "5 rue Feydeau",
"postCode": "75002",
"settlement": "Paris",
"country": "France"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the OpenNMT submissions to the WNGT 2020 efficiency shared task. We explore training and acceleration of Transformer models with various sizes that are trained in a teacher-student setup. We also present a custom and optimized C++ inference engine that enables fast CPU and GPU decoding with few dependencies. By combining additional optimizations and parallelization techniques, we create small, efficient, and highquality neural machine translation models.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the OpenNMT submissions to the WNGT 2020 efficiency shared task. We explore training and acceleration of Transformer models with various sizes that are trained in a teacher-student setup. We also present a custom and optimized C++ inference engine that enables fast CPU and GPU decoding with few dependencies. By combining additional optimizations and parallelization techniques, we create small, efficient, and highquality neural machine translation models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper describes the OpenNMT (Klein et al., 2017) submissions to the Workshop on Neural Generation and Translation 2020 efficiency shared task. For WNMT 2018, we explored training and optimizations of small LSTM translation models combined with a customized runtime . While this resulted in interesting decoding speed, there was still room for improvements in terms of quality, memory usage, and overall efficiency.",
"cite_spans": [
{
"start": 33,
"end": 53,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this 2020 edition, we focus on the standard Transformer architecture (Vaswani et al., 2017) that is now commonly used in production machine translation systems. Similar to our first participation, we train smaller models using the teacherstudent technique (Kim and Rush, 2016) . We experiment with several encoder and decoder sizes following the work by Hongfei et al. (2020) which shows that reducing the number of decoder layers can improve decoding speed at a very limited accuracy cost.",
"cite_spans": [
{
"start": 73,
"end": 95,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 260,
"end": 280,
"text": "(Kim and Rush, 2016)",
"ref_id": "BIBREF3"
},
{
"start": 358,
"end": 379,
"text": "Hongfei et al. (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also keep the approach of running the models with a custom C++ runtime. This year we present CTranslate2 1 , an optimized and production-grade 1 https://github.com/OpenNMT/ CTranslate2 inference engine for OpenNMT models that enables fast CPU and GPU decoding with few dependencies. This library implements several optimizations for decoding neural machine translation models such as 8-bit quantization, parallel translations, caching, and dynamic target vocabulary reduction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 of this paper describes the data preparation and the training procedures we apply to train the candidate models. Section 3 presents the various optimizations we implemented to reduce model size and improve runtime efficiency. Finally, Section 4 details the accuracy and efficiency results achieved by the submitted models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We train our systems using a teacher-student approach (Kim and Rush, 2016) . First, a large model (the teacher) is trained on all available bilingual data, including synthetic data such as backtranslations of monolingual target sentences (Sennrich et al., 2016; and translations of monolingual source sentences (Zhang and Zong, 2016) . Model ensembles are also typically used to build stronger teacher systems.",
"cite_spans": [
{
"start": 54,
"end": 74,
"text": "(Kim and Rush, 2016)",
"ref_id": "BIBREF3"
},
{
"start": 238,
"end": 261,
"text": "(Sennrich et al., 2016;",
"ref_id": "BIBREF13"
},
{
"start": 311,
"end": 333,
"text": "(Zhang and Zong, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Teacher-student training",
"sec_num": "2"
},
{
"text": "Then, a small model (the student) is trained by means of minimizing the loss between the student and teacher systems with the goal of distilling the knowledge of the teacher (Kim and Rush, 2016; ) into a smaller model with comparable accuracy results. Crego and Senellart (2016) show that student models can even outperform to some extent their teacher counterparts.",
"cite_spans": [
{
"start": 174,
"end": 194,
"text": "(Kim and Rush, 2016;",
"ref_id": "BIBREF3"
},
{
"start": 252,
"end": 278,
"text": "Crego and Senellart (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Teacher-student training",
"sec_num": "2"
},
{
"text": "Knowledge distillation is an effective approach to reduce the model size, thus lowering memory and computation requirements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Teacher-student training",
"sec_num": "2"
},
{
"text": "As suggested in the task description and given the limited amount of time available, we use Face-book's WMT 2019 system as our teacher model (Ng et al., 2019) . The system is trained as an ensemble of big Transformer models for both directions, English-German and German-English. Table 1 shows the BLEU (Papineni et al., 2002) ",
"cite_spans": [
{
"start": 141,
"end": 158,
"text": "(Ng et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 304,
"end": 327,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 280,
"end": 288,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Teacher system",
"sec_num": "2.1"
},
{
"text": "We limit our training data to the WMT 2019 English-German translation task 3 . We use the following data to be translated by the Facebook's WMT 2019 teacher system: (a) English part of the bilingual data, (b) English part of ParaCrawl v3, and (c) English monolingual data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": "2.2"
},
{
"text": "Before translation, data is cleaned following several rules: sentences that are empty or longer than 100 tokens without considering tokenization are filtered out. We also use the language identification (LID) toolkit langid (Lui and Baldwin, 2012) to further clean ParaCrawl and the English monolingual corpora which are known to contain a large number of noisy sentences. Nearly 5% of the sentences are discarded by the LID toolkit.",
"cite_spans": [
{
"start": 224,
"end": 247,
"text": "(Lui and Baldwin, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": "2.2"
},
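{
"text": "The following sketch illustrates this cleaning pass, assuming one sentence per line and hypothetical file names; the 100-token threshold and the use of langid follow the description above, while the exact filtering rules of the actual pipeline are not published in this paper.\n\nimport langid\n\ndef keep(line, max_tokens=100):\n    # Drop empty sentences and sentences longer than 100 tokens,\n    # counted on whitespace, i.e. before subword tokenization.\n    tokens = line.split()\n    if not tokens or len(tokens) > max_tokens:\n        return False\n    # langid.classify returns a (language, score) pair; keep only\n    # sentences identified as English for the English-side corpora.\n    lang, _ = langid.classify(line)\n    return lang == 'en'\n\nwith open('mono.en') as src, open('mono.clean.en', 'w') as out:\n    for line in src:\n        if keep(line.strip()):\n            out.write(line)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": "2.2"
},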
{
"text": "The cleaned data is then translated by the teacher model and the resulting synthesized parallel data is used to train the student systems 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": "2.2"
},
{
"text": "We build a joint subword segmentation model from the synthesized parallel data using SentencePiece (Kudo and Richardson, 2018) . The vocabulary size is set to 32, 000 tokens. We removed the non-latin characters before building the vocabulary.",
"cite_spans": [
{
"start": 99,
"end": 126,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary",
"sec_num": "2.3"
},
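{
"text": "As a minimal sketch of this step, the joint model can be trained with the SentencePiece Python API; the file names are placeholders, and older SentencePiece versions take a single argument string instead of keyword arguments.\n\nimport sentencepiece as spm\n\n# Train a joint source/target subword model on the synthesized\n# parallel data; 32,000 matches the vocabulary size used above.\nspm.SentencePieceTrainer.train(\n    input='train.en,train.de',  # hypothetical file names\n    model_prefix='joint_sp',\n    vocab_size=32000)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary",
"sec_num": "2.3"
},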
{
"text": "We train 4 different student systems based on the Transformer architecture (Vaswani et al., 2017) . The candidate configurations are presented in Table 3. In addition to the base Transformer configuration, we train 3 model variants with different number of encoder layers N Enc , decoder layers N Dec , hidden size d model , and feed-forward network size d f f . We share both the source and target word embeddings and softmax weights in the 3 variants while the base configuration considers them as separate weights.",
"cite_spans": [
{
"start": 75,
"end": 97,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Student models",
"sec_num": "2.4"
},
{
"text": "Since the amount of synthetic data is relatively large, we define an epoch as a random sampling of 5M sentences. We set the sampling weights of the selected data (a), (b), and (c) to 5, 2, and 2 respectively. That is, we consider a larger number of sentences synthesized from the English part of the bilingual data than from ParaCrawl or from the monolingual English data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student training",
"sec_num": "2.5"
},
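{
"text": "A minimal sketch of this sampling scheme, with hypothetical in-memory corpora; the real pipeline operates on files, but the weighting logic is the same.\n\nimport random\n\n# Hypothetical corpora synthesized by the teacher: (a) bilingual,\n# (b) ParaCrawl, (c) monolingual; each is a list of sentence pairs.\ncorpora = {'a': bitext, 'b': paracrawl, 'c': mono}\nweights = {'a': 5, 'b': 2, 'c': 2}\n\ndef sample_epoch(n=5_000_000):\n    # One epoch is a weighted random sample of 5M sentence pairs.\n    names = list(corpora)\n    probs = [weights[k] for k in names]\n    return [random.choice(corpora[random.choices(names, weights=probs)[0]])\n            for _ in range(n)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student training",
"sec_num": "2.5"
},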
{
"text": "We use the OpenNMT-tf 5 toolkit to train our student systems. Training is run on a single NVIDIA Tesla V100 GPU with an effective batch size of 25,000 tokens for the early epochs. Just before the final release, we train 10 additional epochs with a larger batch size by increasing the gradient update delay by a factor of 16 . Figure 1 shows the comparison with a larger batch size. We achieve an additional 0.1 to 0.2 BLEU using this technique. Finally, we average the weights of the last 10 checkpoints to produce the final models. ",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Student training",
"sec_num": "2.5"
},
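{
"text": "Checkpoint averaging can be sketched as follows; OpenNMT-tf ships its own averaging utility, so this framework-agnostic version using PyTorch-style state dicts is only an illustration of the computation.\n\nimport torch\n\ndef average_checkpoints(paths):\n    # Average each weight tensor over the last checkpoints; the\n    # result is saved as the final model.\n    avg = None\n    for path in paths:\n        state = torch.load(path, map_location='cpu')\n        if avg is None:\n            avg = {k: v.clone().float() for k, v in state.items()}\n        else:\n            for k, v in state.items():\n                avg[k] += v.float()\n    return {k: v / len(paths) for k, v in avg.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student training",
"sec_num": "2.5"
},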
{
"text": "We list the number of parameters of the 4 trained models in Table 3 and their evaluation scores on the English-German newstest2018 and newstest2019 before any inference optimizations. The results correlate well with the expectation that more model parameters lead to better performance. The base Transformer model achieves better results on new-stest2019 than the Facebook's WMT 2019 model used as a teacher (43.0 vs. 42.1). This confirms the finding in Crego and Senellart (2016) that student systems can sometimes outperform their corresponding teacher networks.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.6"
},
{
"text": "All models are converted and executed with CTrans-late2. We use the version 1.10.0 of the library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference optimizations",
"sec_num": "3"
},
{
"text": "CTranslate2 is a standalone C++ library that implements the complete logic of executing and decod-ing neural machine translation models with a focus on Transformer variants. This custom implementation supports CPU and GPU execution with the goal of being faster, lighter, and more customizable than a general-purpose deep learning framework. Key features of this project include model quantization, parallel translations, dynamic memory usage, and interactive decoding. Some of these features are difficult to implement effectively with standard deep learning frameworks and are the motivation for this project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CTranslate2 technical overview",
"sec_num": "3.1"
},
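{
"text": "As a usage sketch, a converted model can be loaded and queried through the CTranslate2 Python bindings; the model directory is a placeholder, and the result layout shown here follows the 1.x API, where translate_batch returns a list of hypothesis dictionaries per input.\n\nimport ctranslate2\n\n# inter_threads controls file-level parallelism and intra_threads\n# the per-translation OpenMP threads (see Section 3.5).\ntranslator = ctranslate2.Translator('ende_ct2/', device='cpu',\n                                    inter_threads=4, intra_threads=1)\n\n# Inputs are pre-tokenized, e.g. SentencePiece pieces.\nresult = translator.translate_batch([['\\u2581Hello', '\\u2581world']],\n                                    beam_size=1)  # greedy decoding\nprint(result[0][0]['tokens'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CTranslate2 technical overview",
"sec_num": "3.1"
},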
{
"text": "The CPU runtime is backed by Intel MKL, a popular math computation library optimized for Intel processors. We specialize operators with BLAS routines and Vector Mathematical functions whenever possible to benefit from vectorization. We also use the caching allocator provided by mkl malloc and align allocated memory to 64 bytes. Other operations not available in Intel MKL are implemented in plain C++ using the STL and OpenMP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CTranslate2 technical overview",
"sec_num": "3.1"
},
{
"text": "The GPU runtime minimally requires the cuBLAS and Thrust libraries. Basic transformations are defined using Thrust while more complex layers such as layer normalization and softmax are using CUDA kernels ported from PyTorch (Paszke et al., 2019) . We also integrate a caching allocator from the CUB library to reuse previously allocated buffers and minimize device synchronization.",
"cite_spans": [
{
"start": 224,
"end": 245,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CTranslate2 technical overview",
"sec_num": "3.1"
},
{
"text": "Quantization is a standard technique to reduce the model size in memory and accelerate its execution. We quantize the weights of linear and embedding layers to 8-bit signed integers after completing training. Experimental results show that model quantization can achieve high translation accuracy without making the training quantization-aware. We use the equation from Wu et al. (2016) ",
"cite_spans": [
{
"start": 370,
"end": 386,
"text": "Wu et al. (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},
{
"text": "s i = max j |W i,j | W Q i,j = 127 s i W i,j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},
{
"text": "(1) Table 4 shows the effect of weight quantization on the final model size.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 11,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},
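{
"text": "A numpy sketch of Equation 1, with explicit rounding added so the cast to int8 is well defined (the equation above omits the rounding step):\n\nimport numpy as np\n\ndef quantize_rows(W):\n    # Per-row scales: s_i = max_j |W_ij|.\n    s = np.abs(W).max(axis=1, keepdims=True)\n    # Map each row into the signed 8-bit range [-127, 127].\n    Wq = np.round(127.0 / s * W).astype(np.int8)\n    return Wq, s\n\ndef dequantize_rows(Wq, s):\n    # Approximate inverse used before adding the bias term.\n    return Wq.astype(np.float32) * (s / 127.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},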
{
"text": "On CPU, we dynamically quantize the input of the linear layer using Equation 1, multiply the quantized input and weight with MKL's cblas gemm s8u8s32 function, and dequantize the result before adding the bias term. In addition, we employ two notable techniques:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},
{
"text": "Weights pre-packing. On model load, we replace the quantized linear weights with the packed representation returned by MKL's packed GEMM API.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},
{
"text": "Unsigned compensation term. In row major mode, Intel MKL expects the input matrix a to be unsigned while the quantization Equation 1 produces signed values. To overcome this constraint, we shift a to the 8-bit unsigned domain and add a compensation term c to the output matrix. This compensation term only depends on the quantized weight matrix and can be computed once:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},
{
"text": "c i = \u2212128 \u00d7 k j=1 W Q i,j (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},
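{
"text": "A numpy check of Equation 2: shifting the signed activations a by +128 adds 128 times each weight-row sum to the output, and the precomputed term c cancels it exactly.\n\nimport numpy as np\n\ndef u8_compensation(Wq):\n    # c_i = -128 * sum_j Wq_ij, computed once per weight matrix.\n    return -128 * Wq.astype(np.int32).sum(axis=1)\n\ndef gemm_s8_via_u8(a_s8, Wq):\n    # Unsigned-input GEMM plus compensation equals the signed GEMM:\n    # (a + 128) @ Wq.T + c == a @ Wq.T\n    a_u8 = a_s8.astype(np.int32) + 128\n    return a_u8 @ Wq.T.astype(np.int32) + u8_compensation(Wq)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},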
{
"text": "On GPU, 8-bit computation is disabled as our implementation still requires some efficiency improvements regarding repetitive quantization and dequantization. In this case the weights are dequantized on load to single precision floating points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8-bit quantization (CPU)",
"sec_num": "3.2"
},
{
"text": "To maximize speed and reduce memory usage, we use greedy search instead of beam search. During decoding, we also skip the final softmax layer and simply get the maximum from the output logits. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy decoding",
"sec_num": "3.3"
},
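{
"text": "This works because softmax is strictly increasing, so the argmax of the raw logits is the argmax of the probabilities; a one-line illustration:\n\nimport numpy as np\n\nlogits = np.array([2.1, 0.3, 5.7, 1.0])\n# Greedy search picks the next token directly from the logits,\n# skipping the softmax normalization entirely.\nnext_token = int(np.argmax(logits))  # token 2, same as after softmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy decoding",
"sec_num": "3.3"
},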
{
"text": "We apply the common technique of caching linear projections in the Transformer decoder layers. In particular, at step t the decoder self-attention layers compute Attention",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder projections caching",
"sec_num": "3.4"
},
{
"text": "(Q t W Q , Q 1..t W K , Q 1..t W V ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder projections caching",
"sec_num": "3.4"
},
{
"text": "As the matrix Q 1..t\u22121 is constant, we only compute Q t W K and Q t W V and concatenate the results to previous projections before calling the attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder projections caching",
"sec_num": "3.4"
},
{
"text": "We also cache the encoder output projections KW K and V W V in the encoder-decoder attention layers as K and V remain constant during decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder projections caching",
"sec_num": "3.4"
},
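{
"text": "A numpy sketch of the self-attention cache at step t; attention() is an assumed helper, and the head-splitting transpose mentioned below is omitted for brevity.\n\nimport numpy as np\n\ndef self_attention_step(q_t, Wq, Wk, Wv, cache):\n    # Only the projections of the new step are computed; rows for\n    # steps 1..t-1 are reused from the cache.\n    k_t, v_t = q_t @ Wk, q_t @ Wv\n    if cache.get('k') is None:\n        cache['k'], cache['v'] = k_t, v_t\n    else:\n        cache['k'] = np.concatenate([cache['k'], k_t], axis=0)\n        cache['v'] = np.concatenate([cache['v'], v_t], axis=0)\n    return attention(q_t @ Wq, cache['k'], cache['v'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder projections caching",
"sec_num": "3.4"
},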
{
"text": "For both cases, we transpose the matrices to delimit the attention heads before saving them in the cache. Figure 2 compares the observed speedup when increasing the number of threads at the batch levelthe number of OpenMP threads-or at the file levelthe number of batches processed in parallel. We use the same batch size in both cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Decoder projections caching",
"sec_num": "3.4"
},
{
"text": "As the number of threads increases, the first approach looses efficiency because not all operators within the model scale linearly and some of them are not parallelized at all. On the other hand, the second approach continues to improve as we add more threads because all batches are independent and the full decoding can be executed in parallel. However, the duplicated internal state of parallel translators increases memory usage. To mitigate this issue, we share the static model data among all parallel translators and read and write batches in a streaming manner while ensuring that the original order is preserved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "File-level parallelism (CPU)",
"sec_num": "3.5"
},
{
"text": "Given the large number of CPU cores available for this task, we chose to exploit parallelism at the file level to maximize the overall throughput. The number of parallel translators is set to the number of physical cores. Each translator is using a single thread so the decoding algorithm is executed sequentially and without OpenMP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "File-level parallelism (CPU)",
"sec_num": "3.5"
},
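{
"text": "A sketch of this scheme with a thread pool; translate_batch stands in for a single-threaded run of the model (in the real engine a C++ call that releases the Python GIL, so the threads genuinely run in parallel).\n\nfrom concurrent.futures import ThreadPoolExecutor\n\ndef translate_file(batches, num_cores, translate_batch):\n    # One single-threaded translator per physical core; map()\n    # preserves input order, so translations come back in the\n    # order the batches were read.\n    with ThreadPoolExecutor(max_workers=num_cores) as pool:\n        return list(pool.map(translate_batch, batches))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "File-level parallelism (CPU)",
"sec_num": "3.5"
},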
{
"text": "When setting the maximum batch size to N tokens, each consumer reads 8N contiguous tokens, sorts the sentences from the longest to the shortest, and then splits by batch of N tokens before running the model. The correct order is restored when returning the translation results. This local sorting makes the batches contain sentences of similar sizes which reduces the amount of padding and increases the computation efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sorted and dynamic batches",
"sec_num": "3.6"
},
{
"text": "We use N = 6000 for the GPU task, N = 512 for the single-core CPU task, and N = 256 for the multi-core CPU task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sorted and dynamic batches",
"sec_num": "3.6"
},
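{
"text": "The batching logic can be sketched as follows, counting sentence length in tokens; sentence indices are kept so the caller can restore the original order.\n\ndef batches(stream, n):\n    # Read windows of 8N contiguous tokens, then cut each window\n    # into batches of at most N tokens.\n    window, count = [], 0\n    for i, sent in enumerate(stream):\n        window.append((i, sent))\n        count += len(sent)\n        if count >= 8 * n:\n            yield from cut(window, n)\n            window, count = [], 0\n    if window:\n        yield from cut(window, n)\n\ndef cut(window, n):\n    # Sorting long-to-short groups sentences of similar length,\n    # which reduces padding inside each batch.\n    window.sort(key=lambda x: -len(x[1]))\n    batch, tokens = [], 0\n    for item in window:\n        if batch and tokens + len(item[1]) > n:\n            yield batch\n            batch, tokens = [], 0\n        batch.append(item)\n        tokens += len(item[1])\n    if batch:\n        yield batch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sorted and dynamic batches",
"sec_num": "3.6"
},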
{
"text": "During decoding we remove finished translations from the batch to avoid unnecessary computation. We also exploit the prior knowledge that short sentences finish early: by moving shorter sentences at then end of the batch, we reduce memory copies when updating the decoder cache in place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sorted and dynamic batches",
"sec_num": "3.6"
},
{
"text": "We generate a static source-target vocabulary mapping using the technique described in . We first train an alignment model with fast align to align source and target words. To increase the coverage of this mapping, we build a phrase table from these alignments to extract the N -best translation hypotheses of 1-gram, 2-gram, ..., n-gram source sequences and include all target words in the mapping. We set n = 3 to generate the vocabulary mapping that are included in the models of this submission.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target vocabulary reduction",
"sec_num": "3.7"
},
{
"text": "During decoding, we consider all 1-gram, 2gram, and 3-gram sequences in the input batch and select the target tokens that are likely to appear in the translation according to the pretrained mapping as well as the 50 most frequent target tokens. These candidates are used to mask the weights of the final linear layer and effectively reduce its computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target vocabulary reduction",
"sec_num": "3.7"
},
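{
"text": "A sketch of the candidate selection, assuming mapping is the pretrained n-gram-to-target-token dictionary and top_tokens the 50 most frequent target tokens; the returned candidate list is then used to mask the rows of the final linear layer.\n\ndef candidate_targets(batch, mapping, top_tokens):\n    # Collect target tokens associated with every 1-, 2-, and 3-gram\n    # of the input, plus the most frequent target tokens.\n    cands = set(top_tokens)\n    for sent in batch:\n        for n in (1, 2, 3):\n            for i in range(len(sent) - n + 1):\n                cands |= mapping.get(tuple(sent[i:i + n]), set())\n    return sorted(cands)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target vocabulary reduction",
"sec_num": "3.7"
},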
{
"text": "The Docker images entrypoint is a small C++ main function that wraps the CTranslate2 and Sentence-Piece libraries and sets the decoding options that are relevant for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Docker images",
"sec_num": "3.8"
},
{
"text": "We submit separate Docker images for CPU and GPU to only include the required dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Docker images",
"sec_num": "3.8"
},
{
"text": "The images are based respectively on ubuntu:18.04 and nvidia/cuda:10.2-base-ubuntu18.04. Without the model, the CPU image size is 104MB and the GPU image size is 210MB. Table 5 shows the impact of selected optimizations when decoding a base Transformer model on a single CPU core. The CTranslate2 library combined with few optimizations can lead to a 8\u00d7 speedup with limited accuracy loss over a baseline Tensor-Flow program.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Docker images",
"sec_num": "3.8"
},
{
"text": "Finally, Table 6 summarizes the global impact of the optimizations described above that we compare against a baseline beam search decoding with OpenNMT-tf. For a base Transformer model, single-core CPU translation is 13\u00d7 faster while only loosing 0.8 BLEU points and GPU translation is 7\u00d7 faster for the same quality. 3 2xFFN) 4.0 40.1 (6:3) 3.9 39.9 (4:3)",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Optimization results",
"sec_num": "4"
},
{
"text": "3.8 39.0 Table 6 : Time in seconds to translate newstest2019 and BLEU scores as returned by SacreBLEU. The time includes model loading and tokenization. Baseline models are decoded with OpenNMT-tf using a beam of size 4; Optimized models are decoded with the final images submitted for this task. The runs were executed on a c5.metal AWS instance for CPU and a g4dn.xlarge instance for GPU.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Optimization results",
"sec_num": "4"
},
{
"text": "We demonstrated that the OpenNMT ecosystem can be used to train efficient and high-quality neural machine translation models. The training frameworks-OpenNMT-tf and OpenNMT-pyinclude all features and procedures that are commonly applied to reach competitive translation scores. This year we presented CTranslate2, an optimized and production-grade inference engine for OpenNMT models that enables fast CPU and GPU decoding with few dependencies. By combining several optimizations and parallelization techniques, the library can drastically improve decod-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://matrix.statmt.org/ 3 http://statmt.org/wmt19/ translation-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to the long decoding time of the teacher system, the English monolingual data was partially translated. The final data pool used for training consists of: (a) 7.4M bilingual data, (b) 26.1M ParaCrawl data, and (c) 127M English monolingual data.5 https://github.com/OpenNMT/OpenNMT-tf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The difference in BLEU score with the single-core runs comes from the smaller batch size which changes the candidates selected for reducing the target vocabulary.ing speed and reduce memory usage over a generalpurpose deep learning toolkit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation from simplified translations",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Josep",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Crego",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Senellart",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josep Maria Crego and Jean Senellart. 2016. Neu- ral machine translation from simplified translations. CoRR, abs/1612.06139.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Analyzing word translation of transformer layers",
"authors": [
{
"first": "Xy",
"middle": [],
"last": "Hongfei",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "Liu",
"middle": [],
"last": "Qiuhui",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.09586v1"
]
},
"num": null,
"urls": [],
"raw_text": "Xy Hongfei, Deyi Xiong, Joseph van Genabith, and Liu Qiuhui. 2020. Analyzing word translation of trans- former layers. arXiv preprint arXiv:2003.09586v1.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sequencelevel knowledge distillation",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1317--1327",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1139"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "2012. langid.py: An off-the-shelf language identification tool",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the ACL 2012 System Demonstrations",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Pro- ceedings of the ACL 2012 System Demonstrations, pages 25-30, Jeju Island, Korea. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Facebook fair wmt19 news translation task submission",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Kyra",
"middle": [],
"last": "Yee",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "314--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook fair wmt19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314-319, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Scaling neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6301"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "OpenNMT system description for WNMT 2018: 800 words/sec on a single-core CPU",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Dakun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Jean-Pierre",
"middle": [],
"last": "Ramatchandirin",
"suffix": ""
},
{
"first": "Josep",
"middle": [],
"last": "Crego",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "122--128",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2715"
]
},
"num": null,
"urls": [],
"raw_text": "Jean Senellart, Dakun Zhang, Bo Wang, Guillaume Klein, Jean-Pierre Ramatchandirin, Josep Crego, and Alexander Rush. 2018. OpenNMT system de- scription for WNMT 2018: 800 words/sec on a single-core CPU. In Proceedings of the 2nd Work- shop on Neural Machine Translation and Genera- tion, pages 122-128, Melbourne, Australia. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, \u0141ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Analyzing knowledge distillation in neural machine translation",
"authors": [
{
"first": "Dakun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Josep",
"middle": [],
"last": "Crego",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
}
],
"year": 2018,
"venue": "15th International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dakun Zhang, Josep Crego, and Jean Senellart. 2018. Analyzing knowledge distillation in neural machine translation. In 15th International Workshop on Spo- ken Language Translation.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploiting source-side monolingual data in neural machine translation",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1160"
]
},
"num": null,
"urls": [],
"raw_text": "Jiajun Zhang and Chengqing Zong. 2016. Exploit- ing source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1535-1545, Austin, Texas. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "BLEU evaluations on larger batch size on newstest2018.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Speedup and memory usage for a base Transformer model when increasing the number of threads for batch translation, either at the batch level (left blue bars) or at the file level (right red bars).",
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>sum-</td></tr></table>",
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>: English-German parallel data and English</td></tr><tr><td>monolingual data provided by the WMT 2019 transla-</td></tr><tr><td>tion task.</td></tr></table>",
"num": null
},
"TABREF4": {
"type_str": "table",
"text": "Transformer configurations and their BLEU scores on newstest2018 and newstest2019. Evaluation is performed without inference optimizations using OpenNMT-tf and a beam size of 4.",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF6": {
"type_str": "table",
"text": "Effect of weight quantization on the model size on disk. The model is a base Transformer without shared embeddings.",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF8": {
"type_str": "table",
"text": "Single-core greedy decoding speed (target tokens per second) for a base Transformer model. The BLEU scores are computed on an undisclosed test set and show the impact on quality (if any) of the enabled optimization.",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}