{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:29:39.033438Z" }, "title": "Self Attended Stack Pointer Networks for Learning Long Term Dependencies", "authors": [ { "first": "Salih", "middle": [], "last": "Tu\u00e7", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hacettepe University", "location": { "country": "Turkey" } }, "email": "salihtuc0@gmail.com" }, { "first": "Burcu", "middle": [], "last": "Can", "suffix": "", "affiliation": {}, "email": "b.can@wlv.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a novel deep neural architecture for dependency parsing, which is built upon a Transformer Encoder (Vaswani et al., 2017) and a Stack Pointer Network (Ma et al., 2018). We first encode each sentence using a Transformer Network and then the dependency graph is generated by a Stack Pointer Network by selecting the head of each word in the sentence through a head selection process. We evaluate our model on Turkish and English treebanks. The results show that our trasformer-based model learns long term dependencies efficiently compared to sequential models such as recurrent neural networks. Our self attended stack pointer network improves UAS score around 6% upon the LSTM based stack pointer (Ma et al., 2018) for Turkish sentences with a length of more than 20 words.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We propose a novel deep neural architecture for dependency parsing, which is built upon a Transformer Encoder (Vaswani et al., 2017) and a Stack Pointer Network (Ma et al., 2018). We first encode each sentence using a Transformer Network and then the dependency graph is generated by a Stack Pointer Network by selecting the head of each word in the sentence through a head selection process. We evaluate our model on Turkish and English treebanks. The results show that our trasformer-based model learns long term dependencies efficiently compared to sequential models such as recurrent neural networks. Our self attended stack pointer network improves UAS score around 6% upon the LSTM based stack pointer (Ma et al., 2018) for Turkish sentences with a length of more than 20 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dependency Parsing is the task of finding the grammatical structure of a sentence by identifying syntactic and semantic relationships between words. Dependency parsing has been utilized in many other NLP tasks such as machine translation (Carreras and Collins, 2009; Chen et al., 2017) , relation extraction (Fundel-Clemens et al., 2007; Zhang et al., 2018) , named entity recognition (Jie et al., 2017; Finkel and Manning, 2009) , information extraction (Angeli et al., 2015; Peng et al., 2017) , all of which involve natural language understanding to an extent. Each dependency relation is identified between a head word and a dependent word that modifies the head word in a sentence. Although such relations are considered syntactic, they are naturally built upon semantic relationships between words. 
For example, each dependent has a role in modifying its head word, which is a result of a semantic influence.", "cite_spans": [ { "start": 238, "end": 266, "text": "(Carreras and Collins, 2009;", "ref_id": "BIBREF7" }, { "start": 267, "end": 285, "text": "Chen et al., 2017)", "ref_id": "BIBREF9" }, { "start": 308, "end": 337, "text": "(Fundel-Clemens et al., 2007;", "ref_id": "BIBREF18" }, { "start": 338, "end": 357, "text": "Zhang et al., 2018)", "ref_id": "BIBREF50" }, { "start": 385, "end": 403, "text": "(Jie et al., 2017;", "ref_id": "BIBREF24" }, { "start": 404, "end": 429, "text": "Finkel and Manning, 2009)", "ref_id": "BIBREF16" }, { "start": 455, "end": 476, "text": "(Angeli et al., 2015;", "ref_id": "BIBREF2" }, { "start": 477, "end": 495, "text": "Peng et al., 2017)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Within the context of dependency parsing, relations between heads and dependents are also la-beled by specifying the type of the grammatical relation between words. In the Universal Dependencies (de Marneffe et al., 2014) tagset, there are 37 dependency relation types defined. In the latest Universal Dependencies (UD v2.0) tagset, relations are split into four main categories (Core Arguments, Non-core dependents, Nominal dependents and Other) and nine sub-categories (Nominals, Clauses, Modifier Words, Function Words, Coordination, MWE, Loose Special and Other) .", "cite_spans": [ { "start": 195, "end": 221, "text": "(de Marneffe et al., 2014)", "ref_id": "BIBREF34" }, { "start": 471, "end": 566, "text": "(Nominals, Clauses, Modifier Words, Function Words, Coordination, MWE, Loose Special and Other)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One way to illustrate the grammatical structure obtained from dependency parsing is a dependency graph. An example dependency graph is given below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thank you , Mr. Poettering . Here, the relations are illustrated by the links from head words to dependent words along with their dependency labels. Every sentence has a global head word, which is the ROOT of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are two main difficulties in dependency parsing. One is the long term dependencies in especially long sentences that are difficult to be identified in a standard Recurrent Neural Network due to the loss of the information flow in long sequences. Another difficulty in parsing is the outof-vocabulary (OOV) words. In this work, we try to tackle these two problems by using Transformer Networks (Vaswani et al., 2017) by introducing subword information for OOV words in especially morphologically rich languages such as Turkish. For that purpose, we integrate character-level word embeddings obtained from Convolutional Neural Networks (CNNs). The morphological complexity in such agglutinative languages makes the parsing task even harder because of the sparsity problem due to the number of suffixes that each word can take, which brings more problems in syntactic parsing. Dependencies in such languages were also defined between morphemic units (i.e. 
inflectional groups) rather than word tokens (Eryigit et al., 2008) , however this is not in the scope of this work.", "cite_spans": [ { "start": 399, "end": 421, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 1004, "end": 1026, "text": "(Eryigit et al., 2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we introduce a novel two-level deep neural architecture for graph-based dependency parsing. Graph-based dependency parsers build dependency trees among all possible trees, therefore the final dependency tree has the highest score globally. However, in transition-based dependency parsers, each linear selection in a sentence is made based on a local score which may lead to erroneous trees at the end of parsing. For this reason, we prefer graph-based dependency parsing in our approach to be able to do global selections while building dependency trees. In the first level of our deep neural architecture, we encode each sentence through a transformer network (Vaswani et al., 2017) , which shows superior performance in long sequences compared to standard recurrent neural networks (RNNs). In the second level, we decode the dependencies between heads and dependents using a Stack Pointer Network (Ma et al., 2018) , which is extended with an internal stack based on pointer networks (Vinyals et al., 2015) . Since stack pointer networks benefit from the full sequence similar to self attention mechanism in transformer networks, they do not have left-to-right restriction as in transition based parsing. Hence, we combine the two networks to have a more accurate and efficient dependency parser. We evaluate our model on Turkish which is a morphologically rich language and on English with a comparably poorer morphological structure. Although our model does not outperform other recent model, it shows competitive performance among other neural dependency parsers. However, our results show that our self attended stack pointer network improves UAS score around 6% upon the LSTM based stack pointer (Ma et al., 2018) for Turkish sentences with a length of more than 20 words.", "cite_spans": [ { "start": 675, "end": 697, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 913, "end": 930, "text": "(Ma et al., 2018)", "ref_id": "BIBREF32" }, { "start": 1000, "end": 1022, "text": "(Vinyals et al., 2015)", "ref_id": null }, { "start": 1717, "end": 1734, "text": "(Ma et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows: Section 2 reviews the related work on both graph-based and transition-based dependency parsing, Section 3 explains the dependency parsing task briefly, Section 4 describes the proposed deep neural architecture ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dependency parsing is performed by two different approaches: graph-based and transition-based parsing. We review related work on both of these approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Graph-based Dependency Parsing: Graphbased approaches are generally based on performing the entire parsing process as graph operations where the nodes in the graph represent the words in a sentence. 
For the sentence, \"John saw Mary\", we can illustrate its parse tree with a weighted graph G with four vertices where each of them refers to a word including the ROOT . Edges store the dependency scores between the words. The main idea here is to find the maximum spanning tree of this graph G. The parse tree of the sentence is given in Figure 1 . The dependencies are between ROOT and saw, saw and John; and saw and M ary where the first ones are the heads and the latter ones are the dependents.", "cite_spans": [], "ref_spans": [ { "start": 536, "end": 544, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "When the parsing structure is represented as a graph, finding dependencies becomes easier to visualize, and moreover the task becomes finding the highest scored tree among all possible trees. Edge scores in the graphs represent the dependency measures between word couples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Neural architectures have been used for graphbased dependency parsing extensively in the last decade. Li et al. (2018) introduce a seq2seq model using bi-directional LSTMs (BiLSTMs) (Hochreiter and Schmidhuber, 1997), where an attention mechanism is involved between the encoder and decoder LSTMs. Kiperwasser and Goldberg (2016) propose another model using BiLSTMs, where the right and left arcs in the dependency trees are identified through the BiLSTMs. Dozat and Manning (2016) proposes a parser that uses biaffine attention mechanism, which is extended based on the models of Kiperwasser and Goldberg (2016) , Hashimoto et al. (2017) , and Cheng et al. (2016) . The biaffine parser (Dozat and Manning, 2016) provides a baseline for other two models introduced by Zhou and Zhao (2019) and Li et al. (2019) , which forms trees in the form of Head-Driven Phase Structure Grammar (HPSG) and uses self-attention mechanism respectively. Ji et al. (2019) propose a Graph Neural Network (GNN) that is improved upon the biaffine model. Another LSTM-based model is introduced by Choe and Charniak 2016, where dependency parsing is considered as part of language modelling (LM) and each sentence is parsed with a LSTM-LM architecture which builds parse trees simultaneously with the language model.", "cite_spans": [ { "start": 102, "end": 118, "text": "Li et al. (2018)", "ref_id": "BIBREF29" }, { "start": 298, "end": 329, "text": "Kiperwasser and Goldberg (2016)", "ref_id": "BIBREF25" }, { "start": 457, "end": 481, "text": "Dozat and Manning (2016)", "ref_id": "BIBREF13" }, { "start": 581, "end": 612, "text": "Kiperwasser and Goldberg (2016)", "ref_id": "BIBREF25" }, { "start": 615, "end": 638, "text": "Hashimoto et al. (2017)", "ref_id": "BIBREF20" }, { "start": 645, "end": 664, "text": "Cheng et al. (2016)", "ref_id": "BIBREF10" }, { "start": 687, "end": 712, "text": "(Dozat and Manning, 2016)", "ref_id": "BIBREF13" }, { "start": 793, "end": 809, "text": "Li et al. (2019)", "ref_id": "BIBREF28" }, { "start": 936, "end": 952, "text": "Ji et al. (2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The recent works generally focus on the encoder in seq2seq models because a better encoding of an input eliminates most of the cons of the sequence models. For example, Hewitt and Manning 2019and Tai et al. (2015) aim to improve the LSTMbased encoders while Clark et al. 
2018introduce an attention-based approach to improve encoding, where they propose Cross-View Training (CVT).", "cite_spans": [ { "start": 196, "end": 213, "text": "Tai et al. (2015)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this work, we encode each sentence through a transformer network based on self-attention mechanism (Vaswani et al., 2017) and learn the head of each word using a stack pointer network as a decoder (Ma et al., 2018) in our deep neural architecture. Our main aim is to learn long term dependencies efficiently with a transformer network by removing the recurrent structures from encoder. Transformer networks (Vaswani et al., 2017) and stack pointer networks (Ma et al., 2018) have been used for dependency parsing before. However, this will be the first attempt to combine these two methods for the dependency parsing task.", "cite_spans": [ { "start": 102, "end": 124, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 200, "end": 217, "text": "(Ma et al., 2018)", "ref_id": "BIBREF32" }, { "start": 410, "end": 432, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 460, "end": 477, "text": "(Ma et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Transition-based Dependency Parsing: In transition-based dependency parsing, local selections are made for each dependency relationship without considering the complete dependency tree. Therefore, globally motivated selections are normally not performed in transition-based parsing by contrast with graph-based dependency parsing. For this purpose, two stacks are employed to keep track of the actions made during transition-based parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Similar to graph-based parsing, neural approaches have been used extensively for transitionbased parsing. Chen and Manning (2014) introduce a feed forward neural network with various extensions by utilizing single-word, word-pair and three-word features. Weiss et al. (2015) improve upon the model by Chen and Manning (2014) with a deeper neural network and with a more structured training and inference using structured perceptron with beam-search decoding. Andor et al. (2016) use also feed forward neural networks similar to others and argue that feed forward neural networks outperform RNNs in case of a global normalization rather than local normalizations as in Chen and Manning (2014) , which apply greedy parsing.", "cite_spans": [ { "start": 106, "end": 129, "text": "Chen and Manning (2014)", "ref_id": "BIBREF8" }, { "start": 255, "end": 274, "text": "Weiss et al. (2015)", "ref_id": "BIBREF48" }, { "start": 301, "end": 324, "text": "Chen and Manning (2014)", "ref_id": "BIBREF8" }, { "start": 459, "end": 478, "text": "Andor et al. 
(2016)", "ref_id": "BIBREF1" }, { "start": 668, "end": 691, "text": "Chen and Manning (2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Mohammadshahi and Henderson (2019) utilize a transformer network, in which graph features are employed as input and output embeddings to learn graph relations, thereby their novel model, Graph2Graph transformer, is introduced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Fern\u00e1ndez-Gonz\u00e1lez and G\u00f3mez-Rodr\u00edguez (2019) propose a transition-based algorithm that is similar to the stack pointer model by Ma et al. (2018) ; however, left-to-right parsing is adopted on the contrary to Ma et al. (2018) , where top-down parsing is performed. Hence, each parse tree is built in n actions for an n length sentence without requiring any additional data structure.", "cite_spans": [ { "start": 129, "end": 145, "text": "Ma et al. (2018)", "ref_id": "BIBREF32" }, { "start": 209, "end": 225, "text": "Ma et al. (2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In addition to these models, there are some works such as the greedy parser of and Kuncoro et al. (2016) , and the highperformance parser by Qi and Manning (2017) . Nivre and McDonald (2008) indicate that graphbased and transition-based parsers can be also combined by integrating their features. And several works follow this idea (Goldberg and Elhadad, 2010; Spitkovsky et al., 2010; Ma et al., 2013; Ballesteros and Bohnet, 2014; Zhang and Clark, 2008) .", "cite_spans": [ { "start": 83, "end": 104, "text": "Kuncoro et al. (2016)", "ref_id": "BIBREF27" }, { "start": 141, "end": 162, "text": "Qi and Manning (2017)", "ref_id": "BIBREF41" }, { "start": 165, "end": 190, "text": "Nivre and McDonald (2008)", "ref_id": "BIBREF38" }, { "start": 332, "end": 360, "text": "(Goldberg and Elhadad, 2010;", "ref_id": "BIBREF19" }, { "start": 361, "end": 385, "text": "Spitkovsky et al., 2010;", "ref_id": "BIBREF42" }, { "start": 386, "end": 402, "text": "Ma et al., 2013;", "ref_id": "BIBREF30" }, { "start": 403, "end": 432, "text": "Ballesteros and Bohnet, 2014;", "ref_id": "BIBREF4" }, { "start": 433, "end": 455, "text": "Zhang and Clark, 2008)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Dependency parsing is the task of inferring the grammatical structure of a sentence by identifying the relationships between words. Dependency is a head-dependent relation between words and each dependent is affected by its head. The dependencies in a dependency tree are always from the head to the dependents. The parsing, no matter which approach is used, creates a dependency tree or a graph, as we mentioned above. There are some formal conditions of this graph:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Formal Definition of Dependency Parsing", "sec_num": "3" }, { "text": "\u2022 Graph should be connected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Formal Definition of Dependency Parsing", "sec_num": "3" }, { "text": "-Each word must have a head. 
\u2022 Graph must be acyclic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Formal Definition of Dependency Parsing", "sec_num": "3" }, { "text": "-If there are dependencies w1 \u2192 w2 and w2 \u2192 w3; there must not be a dependency such as w3 \u2192 w1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Formal Definition of Dependency Parsing", "sec_num": "3" }, { "text": "\u2022 Each of the vertices must have one incoming edge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Formal Definition of Dependency Parsing", "sec_num": "3" }, { "text": "-Each word must only have one head. A graph that includes w1 \u2192 w2 and w3 \u2192 w2 is not allowed in a dependency graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Formal Definition of Dependency Parsing", "sec_num": "3" }, { "text": "A dependency tree is projective if there are no crossing edges on the dependency graph. Figure 2 illustrates a projective tree and Figure 3 illustrates a non-projective dependency graph. Our model deviates from the STACKPTR model with a transformer network that encodes each word with a self-attention mechanism, which will allow to learn long-term dependencies since every word's relation to all words in a sentence can be effectively processed in a transformer network on the contrary to recurrent neural networks. In sequential recurrent structures such as RNNs or LSTMs, every word's encoding contains information about only previous words in a sentence and there is always a loss in the information flow through the long sequences in those structures.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 96, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 131, "end": 139, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "The Formal Definition of Dependency Parsing", "sec_num": "3" }, { "text": "In our transformer network, we adopt a multihead attention and a feed-forward network. Once we encode a sequence with a transformer network, we decode the sequence to predict the head of each word in that sequence by using a stack pointer network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Formal Definition of Dependency Parsing", "sec_num": "3" }, { "text": "In RNNs, each state is informed by the previous states with a sequential information flow through the states. However, in longer sequences, information passed from earlier states loses its effect on the later states in RNNs by definition. Transformer networks are effective attention-based neural network architectures (Vaswani et al., 2017) . The main idea is to replace the recurrent networks with a single transformer network which has the ability to compute the relationships between all words in a sequence with a self-attention mechanism without requiring any recurrent structure. Therefore, each word in a sequence will be informed by all other words in the sequence.", "cite_spans": [ { "start": 319, "end": 341, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "Learning long term dependencies in especially long sentences is still one of the challenges in dependency parsing. We employ transformer networks in order to tackle with the long term dependencies problem by eliminating the usage of recurrent neural networks while encoding each sentence during parsing. 
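The well-formedness conditions listed in Section 3 (a single head per word, connectedness and acyclicity) and the projectivity property can be checked mechanically. The following is a minimal sketch, not taken from the paper's implementation; it assumes a parse is given as a list of head indices, one entry per word, with 0 denoting ROOT.

```python
def is_well_formed(heads):
    """Check the Section 3 conditions for a head list where heads[i] is the head
    of word i+1 and 0 denotes ROOT (an assumed, illustrative representation)."""
    n = len(heads)
    # Single-head condition is implicit: each word has exactly one entry in `heads`.
    # Connectedness and acyclicity: every word must reach ROOT by repeatedly
    # following its head without revisiting any node.
    for start in range(1, n + 1):
        seen, node = set(), start
        while node != 0:
            if node in seen:          # following heads revisits a node -> cycle
                return False
            seen.add(node)
            node = heads[node - 1]
    return True

def is_projective(heads):
    """A dependency tree is projective if no two arcs cross each other."""
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1)]
    for a1, b1 in arcs:
        for a2, b2 in arcs:
            if a1 < a2 < b1 < b2:     # arcs (a1,b1) and (a2,b2) cross
                return False
    return True

# "John saw Mary": the heads of John, saw and Mary are saw (2), ROOT (0), saw (2).
print(is_well_formed([2, 0, 2]), is_projective([2, 0, 2]))   # True True
```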
To this end, we use a transformer network as an encoder, feeding it with each word's pretrained word embedding (GloVe (Pennington et al., 2014) or Polyglot (Al-Rfou' et al., 2013) embeddings), part-of-speech (PoS) tag embedding, character-level word embedding obtained from a CNN, and the positional encoding of each word.", "cite_spans": [ { "start": 458, "end": 483, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "Positional encoding (PE) is used to inject positional information for each encoded word, since there is no sequential recurrent structure in a self-attention mechanism. With positional encoding, relative or absolute positions of the words in a sentence are utilized. The cos function is used for the odd indices and the sin function is used for the even indices, i.e. the position information is injected through sine waves. Figure 4 gives an overview of the Self-Attended Pointer Network model: after concatenating the word embeddings, PoS tag embeddings and character embeddings obtained from the CNN, the final embedding is fed into the self-attention encoder stack; then the embeddings of the word at the top of the stack, its sibling and its grandparent are summed up in order to predict the dependency head. The sin function for the even indices is computed as follows:", "cite_spans": [], "ref_spans": [ { "start": 448, "end": 456, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "PE(x, 2i) = sin(x / 10000^{2i/d_model}) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "where d_model is the dimension of the word embeddings, i \u2208 [0, d_model/2), and x \u2208 [0, n] is the position of each word in the input sequence s = (w_0, w_1, . . . , w_n). The cos function for the odd indices is computed analogously. The positional encoding is calculated for each embedding and the two are summed, so the dimension d_model does not change. Concatenation would also be possible in theory; however, the position information is carried in the first few dimensions of the input and output embeddings, so when d_model is large enough there is no need to concatenate and summation meets the requirements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "The Encoder stack contains a Multi-Head Attention layer and a Feed-Forward Network, and a Layer Normalization is applied after each of these two layers. There can be more than one encoder in the encoder stack; in this case, the output of one encoder is fed into the next encoder in the stack. In our model, we performed several experiments with different numbers of encoder layers to optimize the depth of the encoder stack for parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "Multi-Head Attention evolves from the self-attention mechanism, which encodes each word using all of the words in the sentence; it therefore learns relations between words better than recurrent structures do. The all-to-all encoding in the self-attention mechanism is performed through query, key and value matrices, and multiple sets of queries, keys and values are learned in the model. 
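As a concrete illustration of Eq. (1), the sketch below computes the sinusoidal positional encodings and adds them to a toy embedding matrix. This is a minimal NumPy example rather than the authors' code; the sentence length and d_model are arbitrary placeholder values.

```python
import numpy as np

def positional_encoding(n_positions, d_model):
    """Sinusoidal PE: sin on even dimensions (Eq. 1), cos on odd dimensions."""
    pe = np.zeros((n_positions, d_model))
    x = np.arange(n_positions)[:, None]              # word positions 0 .. n-1
    i = np.arange(d_model // 2)[None, :]             # i in [0, d_model / 2)
    angles = x / np.power(10000.0, 2 * i / d_model)
    pe[:, 0::2] = np.sin(angles)                     # PE(x, 2i)
    pe[:, 1::2] = np.cos(angles)                     # PE(x, 2i+1)
    return pe

# Toy input: a 5-word sentence whose 8-dimensional vectors stand in for the
# concatenated word / PoS / character embeddings described above.
embeddings = np.random.randn(5, 8)
encoder_input = embeddings + positional_encoding(5, 8)   # summed, so d_model is unchanged
print(encoder_input.shape)                                # (5, 8)
```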
Self-attention is calculated for each of these sets and a new embedding is produced. The new embeddings produced by the sets are concatenated and multiplied with a matrix Z, which is randomly initialized and trained jointly, in order to reduce the concatenated embeddings into a single final embedding. In other words, the final embedding is learnt from different contexts at the same time. The mechanism is called multi-head because each set constitutes an attention head, and the output of each head is calculated using self-attention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "Finally, a Feed Forward Neural Network, which consists of two linear layers with a ReLU activation function, is used to process the embeddings obtained from multi-head attention. It is placed at the end of the encoder so that the embeddings can be trained within a latent space of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "Layer Normalization (Ba et al., 2016) is applied to normalize the weights and retain some form of information from the previous layers, which is performed for both the Multi-Head Attention and the Feed Forward Neural Network.", "cite_spans": [ { "start": 20, "end": 37, "text": "(Ba et al., 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "The final output embeddings contain contextual information about the input sentence and the words in the sequence. Hence, the output of the Transformer Encoder is, theoretically, a more comprehensive representation of contextual information than the input word embeddings and also than the output of a BiLSTM encoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformer Encoder", "sec_num": "4.2" }, { "text": "Stack Pointer Network (STACKPTR) (Ma et al., 2018) is a transition-based structure, but it still performs a global optimization over the potential dependency parse trees of a sentence. STACKPTR is based on a pointer network (PTR-NET) (Vinyals et al., 2015), but differently, it has a stack to store the order of head words in trees. In each step, an arc is built from a child to the head word at the top of the stack based on the attention scores obtained from a pointer network.", "cite_spans": [ { "start": 33, "end": 49, "text": "(Ma et al., 2018", "ref_id": "BIBREF32" }, { "start": 234, "end": 256, "text": "(Vinyals et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "We use a Stack Pointer Network for decoding the sequence to infer the dependencies, where each word is encoded with a Transformer Network as mentioned in the previous section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "The transformer encoder outputs a hidden state vector s_i for the i-th word in the sequence. The hidden state vector is summed with higher-order information similar to that of Ma et al. (2018). There are two types of higher-order information in the model: sibling (two words that have the same parent) and grandparent/grandchild (the parent of the word's parent and the child of the word's child). 
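The following is a schematic, greatly simplified sketch of the top-down, stack-based decoding described above, in which the decoder input for the word on top of the stack is the sum of its encoder state with those of its sibling and grandparent (cf. Eq. (2) below). The greedy child selection, the helper names, and the dot-product stand-in for the attention scorer are assumptions made for illustration only; they do not reproduce the trained STACKPTR decoder.

```python
import numpy as np

def decode_heads(encoder_states, score_fn):
    """Greedy sketch of top-down, stack-based decoding.
    encoder_states: (n+1, d) array; row 0 is ROOT, rows 1..n are the words.
    score_fn(beta, s) -> float, e.g. the biaffine scorer of Eq. (5).
    Returns heads[1..n], where head 0 denotes ROOT."""
    n = encoder_states.shape[0] - 1
    heads = {w: None for w in range(1, n + 1)}
    children = {w: [] for w in range(0, n + 1)}
    stack = [0]                                      # decoding starts with ROOT on the stack
    while stack:
        top = stack[-1]
        grand = heads.get(top)
        grand = top if grand is None else grand      # grandparent of the next child
        sib = children[top][-1] if children[top] else top
        # Higher-order decoder input: sum of the top-of-stack, sibling and
        # grandparent encoder states (cf. Eq. (2) below).
        beta = encoder_states[top] + encoder_states[sib] + encoder_states[grand]
        unattached = [w for w in heads if heads[w] is None]
        # Pointing back at `top` itself means "no more children": pop the stack.
        candidates = unattached + ([top] if (top != 0 or not unattached) else [])
        scores = [score_fn(beta, encoder_states[w]) for w in candidates]
        best = candidates[int(np.argmax(scores))]
        if best == top:
            stack.pop()
        else:
            heads[best] = top                        # attach the chosen child to `top`
            children[top].append(best)
            stack.append(best)                       # recurse into the new child
    return [heads[w] for w in range(1, n + 1)]

# Toy usage: ROOT + 3 words, random 8-dimensional encoder states, and a plain
# dot product standing in for the learned attention scorer.
states = np.random.randn(4, 8)
print(decode_heads(states, lambda b, s: float(b @ s)))
```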
Figure 5 and Figure 6 show an illustration of these higher-order structures.", "cite_spans": [ { "start": 175, "end": 191, "text": "Ma et al. (2018)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 394, "end": 402, "text": "Figure 5", "ref_id": null }, { "start": 407, "end": 415, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "Thus, the input vector for the decoder is the sum of the state vectors of the word at the top of the stack, its sibling and its grandparent:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b2_i = s_h + s_s + s_g", "eq_num": "(2)" } ], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "In the decoder part, an LSTM gathers all of the contextual and higher-order information about the word at the top of the stack. Normally, in pointer networks, at each time step t the decoder receives the input from the last step and outputs the decoder hidden state h_t. An attention score is then obtained as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "e_i^t = score(h_t, s_i) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "where e^t is the output of the scoring function, s_i is the encoder hidden state and h_t is the decoder hidden state at time step t. After calculating the score for each possible output with the biaffine attention mechanism, the final prediction is made with a softmax function that converts the scores into a probability distribution:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a_t = softmax(e_t)", "eq_num": "(4)" } ], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "where a_t is the output probability vector over the possible child words and e_t is the output vector of the scoring function. In our model, the scoring function is adopted from the deep biaffine attention mechanism (Dozat and Manning, 2016):", "cite_spans": [ { "start": 207, "end": 232, "text": "(Dozat and Manning, 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "e_i^t = h_t^T W s_i + U^T h_t + V^T s_i + b (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "where W is the weight matrix, U and V are the weight vectors and b is the bias. 
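To make Eqs. (3)-(5) concrete, the following is a minimal NumPy sketch of the deep biaffine scorer followed by the softmax of Eq. (4). The dimensions and the randomly drawn parameters are placeholders for illustration; in the model they are learned, and the MLP dimensionality reduction discussed next is omitted here.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def biaffine_scores(h_t, S, W, U, V, b):
    """Eq. (5): e_i^t = h_t^T W s_i + U^T h_t + V^T s_i + b, for every encoder state s_i."""
    return np.array([h_t @ W @ s_i + U @ h_t + V @ s_i + b for s_i in S])

d = 8                                             # state size (placeholder)
rng = np.random.default_rng(0)
S = rng.normal(size=(5, d))                       # encoder states s_0 .. s_4 (s_0 = ROOT)
h_t = rng.normal(size=d)                          # decoder hidden state at step t
W = rng.normal(size=(d, d))                       # weight matrix W
U, V = rng.normal(size=d), rng.normal(size=d)     # weight vectors U and V
b = 0.0                                           # bias

e_t = biaffine_scores(h_t, S, W, U, V, b)         # attention scores, Eq. (3)/(5)
a_t = softmax(e_t)                                # probability over candidate children, Eq. (4)
print(a_t, int(a_t.argmax()))
```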
Additionally, before the scoring function, an MLP is applied to the output of the decoder, as proposed by Dozat and Manning (2016), to reduce the dimensionality.", "cite_spans": [ { "start": 182, "end": 206, "text": "Dozat and Manning (2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "As for the dependency labels, we also use another MLP to reduce the dimensionality and then apply the deep biaffine attention to score the possible labels for the word at the top of the stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack Pointer Network", "sec_num": "4.3" }, { "text": "We use the cross-entropy loss for training the model, similar to STACKPTR. The probability of a parse tree y for a given sentence x under the parameter set \u03b8 is P_\u03b8(y|x), estimated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4.4" }, { "text": "P_\u03b8(y|x) = \u220f_{i=1}^{k} P_\u03b8(p_i | p_{<i}, x)", "html": null, "num": null, "text": "" }, "TABREF4": { "type_str": "table", "content": "
Results for the Turkish IMST Dataset (Sulubacak et al., 2016)
Model                 | UAS   | LAS
Our Model w/ Glove    | 93.43 | 91.98
Our Model w/ Polyglot | 94.23 | 92.67
", "html": null, "num": null, "text": "" }, "TABREF5": { "type_str": "table", "content": "", "html": null, "num": null, "text": "Dataset w/ Punctuation w/o Punctuation PTB 94.23 (92.67) 93.47 (91.94) IMST 76.81 (67.95) 71.96 (62.41)" } } } }