{
"paper_id": "I17-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:48.570536Z"
},
"title": "What does Attention in Neural Machine Translation Pay Attention to?",
"authors": [
{
"first": "Hamidreza",
"middle": [],
"last": "Ghader",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"country": "The Netherlands"
}
},
"email": "h.ghader@uva.nl"
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"country": "The Netherlands"
}
},
"email": "c.monz@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Attention in neural machine translation provides the possibility to encode relevant parts of the source sentence at each translation step. As a result, attention is considered to be an alignment model as well. However, there is no work that specifically studies attention and provides analysis of what is being learned by attention models. Thus, the question still remains that how attention is similar or different from the traditional alignment. In this paper, we provide detailed analysis of attention and compare it to traditional alignment. We answer the question of whether attention is only capable of modelling translational equivalent or it captures more information. We show that attention is different from alignment in some cases and is capturing useful information other than alignments.",
"pdf_parse": {
"paper_id": "I17-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "Attention in neural machine translation provides the possibility to encode relevant parts of the source sentence at each translation step. As a result, attention is considered to be an alignment model as well. However, there is no work that specifically studies attention and provides analysis of what is being learned by attention models. Thus, the question still remains that how attention is similar or different from the traditional alignment. In this paper, we provide detailed analysis of attention and compare it to traditional alignment. We answer the question of whether attention is only capable of modelling translational equivalent or it captures more information. We show that attention is different from alignment in some cases and is capturing useful information other than alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) has gained a lot of attention recently due to its substantial improvements in machine translation quality achieving state-of-the-art performance for several languages (Luong et al., 2015b; Jean et al., 2015; Wu et al., 2016) . The core architecture of neural machine translation models is based on the general encoder-decoder approach (Sutskever et al., 2014) . Neural machine translation is an end-toend approach that learns to encode source sentences into distributed representations and decode these representations into sentences in the target language. Among the different neural MT models, attentional NMT (Bahdanau et al., 2015; Luong et al., 2015a) has become popular due to its capability to use the most relevant parts of the source sentence at each translation step. This capability also makes the attentional model superior in translating longer sentences (Bahdanau et al., 2015; Luong et al., 2015a) . Figure 1 : Visualization of the attention paid to the relevant parts of the source sentence for each generated word of a translation example. See how the attention is 'smeared out' over multiple source words in the case of \"would\" and \"like\". Figure 1 shows an example of how attention uses the most relevant source words to generate a target word at each step of the translation. In this paper we focus on studying the relevance of the attended parts, especially cases where attention is 'smeared out' over multiple source words where their relevance is not entirely obvious, see, e.g., \"would\" and \"like\" in Figure 1 . Here, we ask whether these are due to errors of the attention mechanism or are a desired behavior of the model.",
"cite_spans": [
{
"start": 200,
"end": 221,
"text": "(Luong et al., 2015b;",
"ref_id": "BIBREF10"
},
{
"start": 222,
"end": 240,
"text": "Jean et al., 2015;",
"ref_id": "BIBREF6"
},
{
"start": 241,
"end": 257,
"text": "Wu et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 368,
"end": 392,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 645,
"end": 668,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 669,
"end": 689,
"text": "Luong et al., 2015a)",
"ref_id": "BIBREF9"
},
{
"start": 901,
"end": 924,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 925,
"end": 945,
"text": "Luong et al., 2015a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 948,
"end": 956,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1191,
"end": 1199,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1558,
"end": 1566,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the introduction of attention models in neural machine translation (Bahdanau et al., 2015) various modifications have been proposed (Luong et al., 2015a; Cohn et al., 2016; Liu et al., 2016) . However, to the best of our knowledge there is no study that provides an analysis of what kind of phenomena is being captured by attention. There are some works that have looked to attention as being similar to traditional word alignment (Alkhouli et al., 2016; Cohn et al., 2016; Liu et al., 2016; . Some of these approaches also experimented with training the attention model using traditional alignments (Alkhouli et al., 2016; Liu et al., 2016; . Liu et al. (2016) have shown that attention could be seen as a reordering model as well as an alignment model.",
"cite_spans": [
{
"start": 73,
"end": 96,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 138,
"end": 159,
"text": "(Luong et al., 2015a;",
"ref_id": "BIBREF9"
},
{
"start": 160,
"end": 178,
"text": "Cohn et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 179,
"end": 196,
"text": "Liu et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 437,
"end": 460,
"text": "(Alkhouli et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 461,
"end": 479,
"text": "Cohn et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 480,
"end": 497,
"text": "Liu et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 606,
"end": 629,
"text": "(Alkhouli et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 630,
"end": 647,
"text": "Liu et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 650,
"end": 667,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on investigating the differences between attention and alignment and what is being captured by the attention mechanism in general. The questions that we are aiming to answer include: Is the attention model only capable of modelling alignment? And how similar is attention to alignment in different syntactic phenomena?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. However, it captures other information rather than only the translational equivalent in the case of verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper makes the following contributions: 1) We provide a detailed comparison of attention in NMT and word alignment. 2) We show that while different attention mechanisms can lead to different degrees of compliance with respect to word alignments, global compliance is not always helpful for word prediction. 3) We show that attention follows different patterns depending on the type of the word being generated. 4) We demonstrate that attention does not always comply with alignment. We provide evidence showing that the difference between attention and alignment is due to attention model capability to attend the context words influencing the current word translation. Liu et al. (2016) investigate how training the attention model in a supervised manner can benefit machine translation quality. To this end they use traditional alignments obtained by running automatic alignment tools (GIZA++ (Och and Ney, 2003) and fast align (Dyer et al., 2013) ) on the training data and feed it as ground truth to the attention network. They report some improvements in translation quality arguing that the attention model has learned to better align source and target words. The approach of training attention using traditional alignments has also been proposed by others Alkhouli et al., 2016) . show that guided attention with traditional alignment helps in the domain of e-commerce data which includes lots of out of vocabulary (OOV) product names and placeholders, but not much in the other domains. Alkhouli et al. (2016) have separated the alignment model and translation model, reasoning that this avoids propagation of errors from one model to the other as well as providing more flexibility in the model types and training of the models. They use a feed-forward neural network as their alignment model that learns to model jumps in the source side using HMM/IBM alignments obtained by using GIZA++. Shi et al. (2016) show that various kinds of syntactic information are being learned and encoded in the output hidden states of the encoder. The neural system for their experimental analysis is not an attentional model and they argue that attention does not have any impact for learning syntactic information. However, performing the same analysis for morphological information, Belinkov et al. (2017) show that attention has also some effect on the information that the encoder of neural machine translation system encodes in its output hidden states. As part of their analysis they show that a neural machine translation system that has an attention model can learn the POS tags of the source side more efficiently than a system without attention.",
"cite_spans": [
{
"start": 676,
"end": 693,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF8"
},
{
"start": 893,
"end": 920,
"text": "(GIZA++ (Och and Ney, 2003)",
"ref_id": null
},
{
"start": 936,
"end": 955,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 1269,
"end": 1291,
"text": "Alkhouli et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 1501,
"end": 1523,
"text": "Alkhouli et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 1905,
"end": 1922,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 2284,
"end": 2306,
"text": "Belinkov et al. (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, Koehn and Knowles (2017) carried out a brief analysis of how much attention and alignment match in different languages by measuring the probability mass that attention gives to alignments obtained from an automatic alignment tool. They also report differences based on the most attended words. ",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Koehn and Knowles (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section provides a short background on attention and discusses two most popular attention models which are also used in this paper. The first model is a non-recurrent attention model which is equivalent to the \"global attention\" method proposed by Luong et al. (2015a) . The second attention model that we use in our investigation is an input-feeding model similar to the attention model first proposed by Bahdanau et al. (2015) and turned to a more general one and called inputfeeding by Luong et al. (2015a) . Below we describe the details of both models.",
"cite_spans": [
{
"start": 253,
"end": 273,
"text": "Luong et al. (2015a)",
"ref_id": "BIBREF9"
},
{
"start": 411,
"end": 433,
"text": "Bahdanau et al. (2015)",
"ref_id": "BIBREF1"
},
{
"start": 494,
"end": 514,
"text": "Luong et al. (2015a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},
{
"text": "Both non-recurrent and input-feeding models compute a context vector c i at each time step. Subsequently, they concatenate the context vector to the hidden state of decoder and pass it through a non-linearity before it is fed into the softmax output layer of the translation network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = tanh(W c [c t ; h t ])",
"eq_num": "(1)"
}
],
"section": "Attention Models",
"sec_num": "3"
},
{
"text": "The difference of the two models lays in the way they compute the context vector. In the nonrecurrent model, the hidden state of the decoder is compared to each hidden state of the encoder. Often, this comparison is realized as the dot product of vectors. Then the comparison result is fed to a softmax layer to compute the attention weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e t,i = h T i h t (2) \u03b1 t,i = exp(e t,i ) |x| j=1 exp(e t,j )",
"eq_num": "(3)"
}
],
"section": "Attention Models",
"sec_num": "3"
},
{
"text": "Here h t is the hidden state of the decoder at time t, h i is ith hidden state of the encoder and |x| is the length of the source sentence. Then the computed alignment weights are used to compute a weighted sum over the encoder hidden states which results in the context vector mentioned above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},
{
"text": "c i = |x| i=1 \u03b1 t,i h i (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},
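{
"text": "To make the computation above concrete, the following is a minimal NumPy sketch of the non-recurrent (dot-product) attention in Equations 1-4. The dimensions, random inputs, and variable names are illustrative assumptions and are not taken from the paper's implementation.\n\nimport numpy as np\n\ndef softmax(v):\n    e = np.exp(v - v.max())\n    return e / e.sum()\n\nrng = np.random.default_rng(0)\nsrc_len, dim = 6, 4                      # |x| encoder states, hidden size (assumed)\nH_enc = rng.normal(size=(src_len, dim))  # encoder hidden states h_1 ... h_|x|\nh_t = rng.normal(size=dim)               # decoder hidden state at step t\nW_c = rng.normal(size=(dim, 2 * dim))    # combination weights from Equation 1\n\ne_t = H_enc @ h_t                        # Equation 2: e_{t,i} = h_i^T h_t\nalpha_t = softmax(e_t)                   # Equation 3: attention weights\nc_t = alpha_t @ H_enc                    # Equation 4: context vector\nh_tilde_t = np.tanh(W_c @ np.concatenate([c_t, h_t]))  # Equation 1\nprint(alpha_t, h_tilde_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},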
{
"text": "The input-feeding model changes the context vector computation in a way that at each step t the context vector is aware of the previously computed context c t\u22121 . To this end, the input-feeding model feeds back its ownh t\u22121 to the network and uses the resulting hidden state instead of the contextindependent h t , to compare to the hidden states of RWTH data # of sentences 508 # of alignments 10534 % of sure alignments 91% % of possible alignments 9% the encoder. This is defined in the following equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = f (W [h t\u22121 ; y t\u22121 ]) (5) e t,i = h T i h t",
"eq_num": "(6)"
}
],
"section": "Attention Models",
"sec_num": "3"
},
{
"text": "Here, f is the function that the stacked LSTM applies to the input, y t\u22121 is the last generated target word, andh t\u22121 is the output of previous time step of the input-feeding network itself, meaning the output of Equation 1 in the case that context vector has been computed using e t,i from Equation 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},
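{
"text": "The data flow of one input-feeding step (Equations 5 and 6) can be sketched as follows. Here the decoder function f is abstracted as a single affine layer followed by tanh; the actual model uses a stacked LSTM, so this sketch only illustrates how the previous output \\tilde{h}_{t-1} is fed back, and all sizes and names are assumptions.\n\nimport numpy as np\n\ndef softmax(v):\n    e = np.exp(v - v.max())\n    return e / e.sum()\n\nrng = np.random.default_rng(1)\nsrc_len, dim, emb = 6, 4, 3\nH_enc = rng.normal(size=(src_len, dim))        # encoder hidden states h_i\nh_tilde_prev = rng.normal(size=dim)            # tilde-h_{t-1} fed back from Equation 1\ny_prev = rng.normal(size=emb)                  # embedding of the last generated target word\nW = rng.normal(size=(dim, dim + emb))          # input weights of Equation 5\nW_c = rng.normal(size=(dim, 2 * dim))          # combination weights of Equation 1\n\nh_prime_t = np.tanh(W @ np.concatenate([h_tilde_prev, y_prev]))  # Equation 5, with f abstracted as tanh\ne_t = H_enc @ h_prime_t                                          # Equation 6\nalpha_t = softmax(e_t)\nc_t = alpha_t @ H_enc\nh_tilde_t = np.tanh(W_c @ np.concatenate([c_t, h_prime_t]))      # Equation 1, fed back at step t+1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Models",
"sec_num": "3"
},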
{
"text": "As mentioned above, it is a commonly held assumption that attention corresponds to word alignments. To verify this, we investigate whether higher consistency between attention and alignment leads to better translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Attention with Alignment",
"sec_num": "4"
},
{
"text": "In order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function. For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments. The statistics of the data are given in Table 1 . We convert the hard alignments to soft alignments using Equation 7. For unaligned words, we first assume that they have been aligned to all the words in the source side and then do the conversion.",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 410,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Al(x i , y t ) = 1 |Ay t | if x i \u2208 A yt 0 otherwise",
"eq_num": "(7)"
}
],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},
{
"text": "Here A yt is the set of source words aligned to target word y t and |A yt | is the number of source words in the set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},
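{
"text": "As an illustration, the conversion of hard alignments to soft alignments in Equation 7 can be sketched as below. The example alignment sets are made up, and unaligned target words are treated as aligned to every source word before the conversion, as described above.\n\nimport numpy as np\n\ndef soft_alignment(aligned_sets, src_len):\n    # aligned_sets[t] is the set A_{y_t} of source positions aligned to target word y_t\n    tgt_len = len(aligned_sets)\n    Al = np.zeros((tgt_len, src_len))\n    for t, A in enumerate(aligned_sets):\n        if not A:                        # unaligned word: assume alignment to all source words\n            A = set(range(src_len))\n        for i in A:\n            Al[t, i] = 1.0 / len(A)      # Equation 7\n    return Al\n\n# example: 3 target words over a 4-word source sentence\nprint(soft_alignment([{0}, {1, 2}, set()], src_len=4))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},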
{
"text": "After conversion of the hard alignments to soft ones, we compute the attention loss as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},
{
"text": "L At (y t ) = \u2212 |x| i=1 Al(x i , y t ) log(At(x i , y t )) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},
{
"text": "Here x is the source sentence and Al(x i , y t ) is the weight of the alignment link between source word x i and the target word (see Equation 7). At(x i , y t ) is the attention weight \u03b1 t,i (see Equation 3) of the source word x i , when generating the target word y t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},
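{
"text": "A small sketch of the attention loss in Equation 8, i.e., the cross entropy between the soft alignment and the attention distribution of a single target word; the two distributions below are toy values used only for illustration.\n\nimport numpy as np\n\ndef attention_loss(al_row, at_row, eps=1e-12):\n    # L_At(y_t) = - sum_i Al(x_i, y_t) * log(At(x_i, y_t)), Equation 8\n    return -np.sum(al_row * np.log(at_row + eps))\n\nal_row = np.array([0.0, 0.5, 0.5, 0.0])   # soft alignment Al(., y_t) from Equation 7\nat_row = np.array([0.1, 0.6, 0.2, 0.1])   # attention weights alpha_{t,i} from Equation 3\nprint(attention_loss(al_row, at_row))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},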
{
"text": "In our analysis, we also look into the relation between translation quality and the quality of the attention with respect to the alignments. For measuring the quality of attention, we use the attention loss defined in Equation 8. As a measure of translation quality, we choose the loss between the output of our NMT system and the reference translation at each translation step, which we call word prediction loss. The word prediction loss for word y t is logarithm of the probability given in Equation 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},
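{
"text": "The word prediction loss can be sketched in the same style; Equation 9 is truncated in this excerpt, so taking the negative logarithm of the probability assigned to the reference word is an assumption consistent with standard NMT training rather than a statement of the paper's exact definition.\n\nimport numpy as np\n\ndef word_prediction_loss(output_probs, reference_index):\n    # negative log-probability that the model assigns to the reference target word\n    return -np.log(output_probs[reference_index] + 1e-12)\n\nsoftmax_output = np.array([0.05, 0.7, 0.15, 0.1])  # toy softmax distribution over a tiny vocabulary\nprint(word_prediction_loss(softmax_output, reference_index=1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Attention-Alignment Accuracy",
"sec_num": "4.1"
},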
{
"text": "p nmt (y t | y