{ "paper_id": "N15-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:32:32.522696Z" }, "title": "A Transition-based Algorithm for AMR Parsing", "authors": [ { "first": "Chuan", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brandeis University", "location": {} }, "email": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brandeis University", "location": {} }, "email": "xuen@brandeis.edu" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "", "affiliation": {}, "email": "sameer.pradhan@childrens.harvard.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a two-stage framework to parse a sentence into its Abstract Meaning Representation (AMR). We first use a dependency parser to generate a dependency tree for the sentence. In the second stage, we design a novel transition-based algorithm that transforms the dependency tree to an AMR graph. There are several advantages with this approach. First, the dependency parser can be trained on a training set much larger than the training set for the tree-to-graph algorithm, resulting in a more accurate AMR parser overall. Our parser yields an improvement of 5% absolute in F-measure over the best previous result. Second, the actions that we design are linguistically intuitive and capture the regularities in the mapping between the dependency structure and the AMR of a sentence. Third, our parser runs in nearly linear time in practice in spite of a worst-case complexity of O(n 2).", "pdf_parse": { "paper_id": "N15-1040", "_pdf_hash": "", "abstract": [ { "text": "We present a two-stage framework to parse a sentence into its Abstract Meaning Representation (AMR). We first use a dependency parser to generate a dependency tree for the sentence. In the second stage, we design a novel transition-based algorithm that transforms the dependency tree to an AMR graph. There are several advantages with this approach. First, the dependency parser can be trained on a training set much larger than the training set for the tree-to-graph algorithm, resulting in a more accurate AMR parser overall. Our parser yields an improvement of 5% absolute in F-measure over the best previous result. Second, the actions that we design are linguistically intuitive and capture the regularities in the mapping between the dependency structure and the AMR of a sentence. Third, our parser runs in nearly linear time in practice in spite of a worst-case complexity of O(n 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Abstract Meaning Representation (AMR) is a rooted, directed, edge-labeled and leaf-labeled graph that is used to represent the meaning of a sentence. The AMR formalism has been used to annotate the AMR Annotation Corpus (Banarescu et al., 2013) , a corpus of over 10 thousand sentences that is still undergoing expansion. The building blocks for an AMR representation are concepts and relations between them. 
Understanding these concepts and their relations is crucial to understanding the meaning of a sentence and could potentially benefit a number of natural language applications such as Information Extraction, Question Answering and Machine Translation.", "cite_spans": [ { "start": 220, "end": 244, "text": "(Banarescu et al., 2013)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The property that makes AMR a graph instead of a tree is that AMR allows reentrancy, meaning that the same concept can participate in multiple relations. Parsing a sentence into an AMR would seem to require graph-based algorithms, but moving to graph-based algorithms from the typical tree-based algorithms that we are familiar with is a big step in terms of computational complexity. Indeed, quite a bit of effort has gone into developing grammars and efficient graph-based algorithms that can be used to parse AMRs (Chiang et al., 2013) . Linguistically, however, there are many similarities between an AMR and the dependency structure of a sentence. Both describe relations as holding between a head and its dependent, or between a parent and its child. AMR concepts and relations abstract away from actual word tokens, but there are regularities in their mappings. Content words generally become concepts while function words either become relations or get omitted if they do not contribute to the meaning of a sentence. This is illustrated in Figure 1 , where 'the' and 'to' in the dependency tree are omitted from the AMR and the preposition 'in' becomes a relation of type location. In AMR, reentrancy is also used to represent co-reference, but this only happens in some limited contexts. In Figure 1 , 'police' is an argument of both 'arrest' and 'want' as the result of a control structure. This suggests that it is possible to transform a dependency tree into an AMR with a limited number of actions, and to learn a model that determines which action to take given pairs of aligned dependency trees and AMRs as training data.", "cite_spans": [ { "start": 517, "end": 538, "text": "(Chiang et al., 2013)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1049, "end": 1057, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1301, "end": 1309, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This is the approach we adopt in the present work: we present a transition-based framework in which we parse a sentence into an AMR by taking the dependency tree of that sentence as input and transforming it into an AMR representation via a series of actions. This means that a sentence is parsed into an AMR in two steps. In the first step the sentence is parsed into a dependency tree with a dependency parser, and in the second step the dependency tree is transformed into an AMR graph. One advantage of this approach is that the dependency parser does not have to be trained on the same data set as the dependency-to-AMR transducer. This allows us to use more accurate dependency parsers trained on data sets much larger than the AMR Annotation Corpus and have a more advantageous starting point. Our experiments show that this approach is very effective and yields an improvement of 5% absolute over the previously reported best result (Flanigan et al., 2014) in F-score, as measured by the Smatch metric (Cai and Knight, 2013).", "cite_spans": [ { "start": 943, "end": 966, "text": "(Flanigan et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
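To make the two-stage design concrete, here is a minimal sketch of the pipeline in Python; `dep_parser`, `transition_parser` and their methods are illustrative stand-ins for any off-the-shelf dependency parser and the tree-to-graph transducer of Section 3, not the authors' actual interfaces.

```python
# Minimal sketch of the two-stage pipeline; all names are hypothetical.

def parse_to_amr(sentence, dep_parser, transition_parser):
    """Parse a sentence into an AMR graph in two stages."""
    # Stage 1: dependency parsing, which can be trained on treebanks
    # much larger than the AMR Annotation Corpus.
    dep_tree = dep_parser.parse(sentence)
    # Stage 2: transform the dependency tree into an AMR graph via a
    # sequence of learned actions (Section 3).
    return transition_parser.transform(sentence, dep_tree)
```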
{ "text": "The rest of the paper is organized as follows. In \u00a72, we describe how we align the word tokens in a sentence with its AMR to create a span graph, based on which we extract contextual information as features and perform actions. In \u00a73, we present our transition-based parsing algorithm and describe the actions used to transform the dependency tree of a sentence into an AMR. In \u00a74, we present the learning algorithm and the features we extract to train the transition model. In \u00a75, we present experimental results. \u00a76 describes related work, and we conclude in \u00a77.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unlike the dependency structure of a sentence, where each word token is a node in the dependency tree and there is an inherent alignment between the word tokens in the sentence and the nodes in the dependency tree, AMR is an abstract representation where the word order of the corresponding sentence is not maintained. In addition, some words become abstract concepts or relations while other words are simply deleted because they do not contribute to meaning. The alignment between the word tokens and the concepts is non-trivial, but in order to learn the transformation from a dependency tree to an AMR graph, we have to first establish the alignment between the word tokens in the sentence and the concepts in the AMR. We use the aligner that comes with JAMR (Flanigan et al., 2014) to produce this alignment. The JAMR aligner attempts to greedily align every concept or graph fragment in the AMR graph with a contiguous word token sequence in the sentence. We use a data structure called a span graph to represent an AMR graph that is aligned with the word tokens in a sentence. For each sentence w = w_0, w_1, . . . , w_n, where token w_0 is a special root symbol, a span graph is a directed, labeled graph G = (V, A), where V = {s_{i,j} | i, j \u2208 (0, n) and j > i} is a set of nodes and A \u2286 V \u00d7 V is a set of arcs. Each node s_{i,j} of G corresponds to a continuous span (w_i, . . . , w_{j-1}) in sentence w and is indexed by the starting position i. Each node is assigned a concept label from a set L_V of concept labels, and each arc is assigned a relation label from a set L_A of relation labels.", "cite_spans": [ { "start": 758, "end": 781, "text": "(Flanigan et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Representation", "sec_num": "2" }, { "text": "For example, given an AMR graph G_AMR in Figure 2a, its span graph G can be represented as in Figure 2b . In span graph G, node s_{3,4}'s sentence span is (want) and its concept label is want-01, which represents the single node want-01 in the AMR. To simplify the alignment, when creating a span graph out of an AMR, we also collapse some AMR subgraphs in such a way that they can be deterministically restored to their original state for evaluation. 
For example, the four nodes in the AMR subgraph that correspond to the span (Micheal, Karras) are collapsed into a single node s_{6,8} in the span graph and assigned the concept label person+name, as shown in Figure 3 . So the concept label set that our model predicts consists of both the labels of concepts in the original AMR graph and the labels that result from collapsing AMR subgraphs. Representing the AMR graph this way allows us to formulate the AMR parsing problem as a joint learning problem where we can design a set of actions to simultaneously predict the concepts (nodes) and relations (arcs) in the AMR graph as well as the labels on them.", "cite_spans": [ { "start": 92, "end": 101, "text": "Figure 2b", "ref_id": null } ], "ref_spans": [ { "start": 42, "end": 48, "text": "Figure", "ref_id": null }, { "start": 645, "end": 653, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Graph Representation", "sec_num": "2" }, { "text": "3 Transition-based AMR Parsing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Representation", "sec_num": "2" }, { "text": "Similar to transition-based dependency parsing (Nivre, 2008) , we define a transition system for AMR parsing as a quadruple S = (S, T, s_0, S_t), where", "cite_spans": [ { "start": 47, "end": 60, "text": "(Nivre, 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "\u2022 S is a set of parsing states (configurations).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "\u2022 T is a set of parsing actions (transitions), each of which is a function t : S \u2192 S. \u2022 s_0 is an initialization function, mapping each input sentence w and its dependency tree D to an initial state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "\u2022 S_t \u2286 S is a set of terminal states.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "Each state (configuration) of our transition-based parser is a triple (\u03c3, \u03b2, G). \u03c3 is a buffer that stores the indices of the nodes which have not been processed, and we write \u03c3 = \u03c3_0|\u03c3' to indicate that \u03c3_0 is the topmost element of \u03c3. \u03b2 is also a buffer [\u03b2_0, \u03b2_1, . . . , \u03b2_j], and each element \u03b2_i of \u03b2 indicates an edge (\u03c3_0, \u03b2_i) which has not been processed in the partial graph. We likewise write \u03b2 = \u03b2_0|\u03b2' to indicate that the topmost element of \u03b2 is \u03b2_0. We use the span graph G to store the partial parse for the input sentence w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "Note that unlike traditional transition-based syntactic parsers, which store partial parses in a stack structure and build a tree or graph incrementally, here we use the \u03c3 and \u03b2 buffers only to guide the parsing process (which node or edge is to be processed next); the actual tree-to-graph transformations are applied to G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" },
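As a concrete illustration (not the authors' implementation), the configuration (sigma, beta, G) could be encoded as follows; all class and field names are our own.

```python
# A sketch of the parser configuration (sigma, beta, G); illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class SpanGraph:
    # node id -> (i, j): the node covers tokens w_i .. w_{j-1}
    spans: Dict[int, Tuple[int, int]] = field(default_factory=dict)
    # node id -> concept label (None until a NEXT-NODE action assigns one)
    concept: Dict[int, Optional[str]] = field(default_factory=dict)
    # (head, dependent) -> relation label (None until assigned)
    arcs: Dict[Tuple[int, int], Optional[str]] = field(default_factory=dict)

@dataclass
class State:
    sigma: List[int]   # unprocessed nodes; sigma[0] is the topmost element
    beta: List[int]    # unprocessed edges (sigma[0], beta[i]) of the partial graph
    graph: SpanGraph   # the partial parse G

    def is_terminal(self) -> bool:
        # parsing stops when both buffers are empty
        return not self.sigma and not self.beta
```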
{ "text": "When the parsing procedure starts, \u03c3 is initialized with a post-order traversal of the input dependency tree D with topmost element \u03c3_0, and \u03b2 is initialized with node \u03c3_0's children, or set to null if \u03c3_0 is a leaf node. G is initialized with all the nodes and edges of D. Initially, all the nodes of G have a span length of one and all the labels for nodes and edges are set to null. As the parsing procedure goes on, the parser will process all the nodes and their outgoing edges in dependency tree D in a bottom-up, left-to-right manner, and at each state a certain action will be applied to the current node or edge. The parsing process terminates when both \u03c3 and \u03b2 are empty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "The most important part of the transition-based parser is the set of actions (transitions). As stated in (Sartorio et al., 2013) , the design space of possible actions is actually infinite since the set of parsing states is infinite. However, if the problem is amenable to transition-based parsing, we can design a finite set of actions by categorizing all the possible situations we run into in the parsing process. In \u00a75.2 we show that this is the case here and that our action set can account for almost all the transformations from dependency trees to AMR graphs.", "cite_spans": [ { "start": 105, "end": 128, "text": "(Sartorio et al., 2013)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "We define 8 types of actions for the action set T , which are summarized in Table 1 . The action set can be divided into two categories based on the conditions of buffer \u03b2. When \u03b2 is not empty, parsing decisions are made based on the edge (\u03c3_0, \u03b2_0); otherwise, only the current node \u03c3_0 is examined. Also, to simultaneously make decisions on the assignment of a concept or relation label, we augment some of the actions with an extra parameter l_r or l_c. We define \u03b3 : V \u2192 L_V as the concept labeling function for nodes and \u03b4 : A \u2192 L_A as the relation labeling function for arcs, so \u03b4[(\u03c3_0, \u03b2_0) \u2192 l_r] means assigning relation label l_r to arc (\u03c3_0, \u03b2_0). All the actions update buffers \u03c3 and \u03b2 and apply some transformation G \u21d2 G' to the partial graph. 
The 8 actions are described below.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "Table 1 (transitions; CH(x, y) denotes all of node x's children in graph y): NEXT-EDGE-l_r: (\u03c3_0|\u03c3', \u03b2_0|\u03b2', G) \u21d2 (\u03c3_0|\u03c3', \u03b2', G'); assigns \u03b4[(\u03c3_0, \u03b2_0) \u2192 l_r]; precondition: \u03b2 is not empty. SWAP-l_r: (\u03c3_0|\u03c3', \u03b2_0|\u03b2', G) \u21d2 (\u03c3_0|\u03b2_0|\u03c3', \u03b2', G'); assigns \u03b4[(\u03b2_0, \u03c3_0) \u2192 l_r]; precondition: \u03b2 is not empty. REATTACH_k-l_r: (\u03c3_0|\u03c3', \u03b2_0|\u03b2', G) \u21d2 (\u03c3_0|\u03c3', \u03b2', G'); assigns \u03b4[(k, \u03b2_0) \u2192 l_r]; precondition: \u03b2 is not empty. REPLACE-HEAD: (\u03c3_0|\u03c3', \u03b2_0|\u03b2', G) \u21d2 (\u03b2_0|\u03c3', \u03b2 = CH(\u03b2_0, G'), G'); assigns NONE; precondition: \u03b2 is not empty. REENTRANCE_k-l_r: (\u03c3_0|\u03c3', \u03b2_0|\u03b2', G) \u21d2 (\u03c3_0|\u03c3', \u03b2_0|\u03b2', G'); assigns \u03b4[(k, \u03b2_0) \u2192 l_r]; precondition: \u03b2 is not empty. MERGE: (\u03c3_0|\u03c3', \u03b2_0|\u03b2', G) \u21d2 (\u03c3\u0303|\u03c3', \u03b2', G'); assigns NONE; precondition: \u03b2 is not empty. NEXT-NODE-l_c: (\u03c3_0|\u03c3_1|\u03c3', [], G) \u21d2 (\u03c3_1|\u03c3', \u03b2 = CH(\u03c3_1, G'), G'); assigns \u03b3[\u03c3_0 \u2192 l_c]; precondition: \u03b2 is empty. DELETE-NODE: (\u03c3_0|\u03c3_1|\u03c3', [], G) \u21d2 (\u03c3_1|\u03c3', \u03b2 = CH(\u03c3_1, G'), G'); assigns NONE; precondition: \u03b2 is empty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "\u2022 NEXT-EDGE-l_r (ned). This action assigns a relation label l_r to the current edge (\u03c3_0, \u03b2_0) and makes no further modification to the partial graph. Then it pops out the top element of buffer \u03b2 so that the parser moves one step forward to examine the next edge if it exists. \u2022 SWAP-l_r (sw). This action reverses the dependency relation between nodes \u03c3_0 and \u03b2_0 and makes node \u03b2_0 the new head of the subgraph. It also assigns relation label l_r to the arc (\u03b2_0, \u03c3_0). Then it pops out \u03b2_0 and inserts it into \u03c3 right after \u03c3_0 for future revisiting. This action resolves differences in the choice of head between the dependency tree and the AMR graph. Figure 4 gives an example of applying the SWAP-op1 action to arc (Korea, and) in the dependency tree of the sentence \"South Korea and Israel oppose ...\". \u2022 REATTACH_k-l_r (reat). This action removes the current arc (\u03c3_0, \u03b2_0) and reattaches node \u03b2_0 to some node k in the partial graph. It also assigns a relation label l_r to the newly created arc (k, \u03b2_0) and advances one step by popping out \u03b2_0. Theoretically, the choice of node k could be any node in the partial graph under the constraint that arc (k, \u03b2_0) doesn't produce a self-looping cycle. The intuition behind this action is that after swapping a head and its dependent, some of the dependents of the old head should be reattached to the new head. Figure 5 shows an example where node Israel needs to be reattached to node and after a head-dependent swap. \u2022 REPLACE-HEAD (rph). This action removes node \u03c3_0 and replaces it with node \u03b2_0. Node \u03b2_0 also inherits all the incoming and outgoing arcs of \u03c3_0. Then it pops out \u03b2_0 and inserts it into the top position of buffer \u03c3. \u03b2 is re-initialized with all the children of \u03b2_0 in the transformed graph G'. This action targets nodes in the dependency tree that do not correspond to concepts in the AMR graph and become a relation instead. 
An example is provided in Figure 6 , where node in, a preposition, is replaced with node Singapore; in a subsequent NEXT-EDGE action that examines arc (live, Singapore), the arc is labeled location. \u2022 REENTRANCE_k-l_r (reen). This is the action that transforms a tree into a graph. It keeps the current arc unchanged, and links node \u03b2_0 to every possible node k in the partial graph that can also be its parent. Similar to the REATTACH action, the newly created arc (k, \u03b2_0) should not produce a self-looping cycle, and parameter k is bounded by the sentence length. In practice, we seek to constrain this action as we will explain in \u00a73.2. Intuitively, this action can be used to model co-reference, and an example is given in Figure 7 . \u2022 MERGE (mrg). This action merges nodes \u03c3_0 and \u03b2_0 into one node \u03c3\u0303 which covers multiple words in the sentence. The new node inherits all the incoming and outgoing arcs of both nodes \u03c3_0 and \u03b2_0. The MERGE action is intended to produce nodes that cover a continuous span in the sentence that corresponds to a single named entity in the AMR graph; see Figure 8 for an example. When \u03b2 is empty, which means all the outgoing arcs of node \u03c3_0 have been processed or \u03c3_0 has no outgoing arcs, the following two actions can be applied:", "cite_spans": [ { "start": 734, "end": 746, "text": "(Korea, and)", "ref_id": null } ], "ref_spans": [ { "start": 671, "end": 679, "text": "Figure 4", "ref_id": null }, { "start": 1382, "end": 1390, "text": "Figure 5", "ref_id": null }, { "start": 1940, "end": 1948, "text": "Figure 6", "ref_id": null }, { "start": 2646, "end": 2654, "text": "Figure 7", "ref_id": null }, { "start": 2960, "end": 2968, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "\u2022 NEXT-NODE-l_c (nnd). This action first assigns a concept label l_c to node \u03c3_0. Then it advances the parsing procedure by popping out the top element \u03c3_0 of buffer \u03c3 and re-initializes buffer \u03b2 with all the children of node \u03c3_1, which becomes the new top element of \u03c3. Since this action is applied to every node that is kept in the final parsed graph, concept labeling can be done simultaneously through this action. \u2022 DELETE-NODE (dnd). This action simply deletes the node \u03c3_0 and removes all the arcs associated with it. This action models the fact that most function words are stripped off in the AMR of a sentence. Note that this action only targets function words that are leaves in the dependency tree, and we constrain this action by only deleting nodes which do not have outgoing arcs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" },
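Continuing the illustrative State sketch above, two of these transitions could be encoded as follows; again a sketch under our own naming, not the authors' code.

```python
def next_edge(state: State, l_r: str) -> State:
    """NEXT-EDGE-l_r: label the current edge (sigma_0, beta_0) with l_r,
    then pop beta to move on to the next unprocessed edge."""
    s0, b0 = state.sigma[0], state.beta[0]
    state.graph.arcs[(s0, b0)] = l_r      # delta[(sigma_0, beta_0) -> l_r]
    state.beta = state.beta[1:]
    return state

def swap(state: State, l_r: str) -> State:
    """SWAP-l_r: reverse the arc (sigma_0, beta_0), label the reversed arc,
    and insert beta_0 into sigma right after sigma_0 for future revisiting."""
    s0, b0 = state.sigma[0], state.beta[0]
    del state.graph.arcs[(s0, b0)]
    state.graph.arcs[(b0, s0)] = l_r      # delta[(beta_0, sigma_0) -> l_r]
    state.beta = state.beta[1:]
    state.sigma = [s0, b0] + state.sigma[1:]
    return state
```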
{ "text": "When parsing a sentence of length n (excluding the special root symbol w_0), its corresponding dependency tree will have n nodes and n \u2212 1 arcs. For projective transition-based dependency parsing, the parser needs to take exactly 2n \u2212 1 steps or actions, so the complexity is O(n). However, for our tree-to-graph parser defined above, the number of actions needed is no longer linearly bounded by the sentence length. If there are no REATTACH, REENTRANCE or SWAP actions during the parsing process, the algorithm traverses every node and edge in the dependency tree, which results in 2n actions. However, REATTACH and REENTRANCE actions add extra edges that need to be re-processed, and the SWAP action adds both nodes and edges that need to be re-visited. Since the space of all possible extra edges is (n \u2212 2)^2 and revisiting them only adds more actions linearly, the total asymptotic runtime complexity of our algorithm is O(n^2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "In practice, however, the number of applications of the REATTACH action is much smaller than in the worst-case scenario due to the similarities between the dependency tree and the AMR graph of a sentence. Also, nodes with reentrancies in AMR only account for a small fraction of all the nodes, so the REENTRANCE action occurs only a constant number of times. These properties allow the tree-to-graph parser to parse a sentence in nearly linear time in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition System", "sec_num": "3.1" }, { "text": "Algorithm 1 Parsing algorithm. Input: sentence w = w_0 . . . w_n and its dependency tree D_w. Output: parsed graph G_p. 1: s \u2190 s_0(D_w, w) 2: while s \u2209 S_t do 3: T \u2190 all possible actions according to s 4: bestT \u2190 argmax_{t \u2208 T} score(t, s) 5: s \u2190 apply bestT to s 6: end while 7: return G_p", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Parsing Algorithm", "sec_num": "3.2" }, { "text": "Our parsing algorithm is similar to the parser in (Sartorio et al., 2013) . At each parsing state s \u2208 S, the algorithm greedily chooses the parsing action t \u2208 T that maximizes the score function score(). The score function is a linear model defined over parsing action t and parsing state s.", "cite_spans": [ { "start": 98, "end": 121, "text": "(Sartorio et al., 2013)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Greedy Parsing Algorithm", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score(t, s) = \u03c9 \u2022 \u03c6(t, s)", "eq_num": "(1)" } ], "section": "Greedy Parsing Algorithm", "sec_num": "3.2" }, { "text": "where \u03c9 is the weight vector and \u03c6 is a function that extracts the feature vector representation for a possible state-action pair \u27e8t, s\u27e9. First, the algorithm initializes the state s with the sentence w and its dependency tree D_w. At each iteration, it gets all the possible actions for the current state s (line 3). Then, it chooses the action with the highest score given by the function score() and applies it to s (lines 4-5). When the current state reaches a terminal state, the parser stops and returns the parsed graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Parsing Algorithm", "sec_num": "3.2" }, { "text": "As pointed out in (Bohnet and Nivre, 2012) , constraints can be added to limit the number of possible actions to be evaluated at line 3. There can be formal constraints on states, such as the constraint that the SWAP action should not be applied twice to the same pair of nodes. We can also apply soft constraints to filter out unlikely concept labels, relation labels and candidate nodes k for REATTACH and REENTRANCE. In our parser, we enforce the constraint that NEXT-NODE-l_c can only choose from concept labels that co-occur with the current node's lemma in the training data. We also empirically set the constraint that REATTACH_k can only choose k among \u03c3_0's grandparents and great-grandparents. Additionally, REENTRANCE_k can only choose k among its siblings. These constraints greatly reduce the search space, thus speeding up the parser.", "cite_spans": [ { "start": 18, "end": 42, "text": "(Bohnet and Nivre, 2012)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Greedy Parsing Algorithm", "sec_num": "3.2" },
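Putting Algorithm 1 and these constraints together, greedy decoding might look like the sketch below; `enumerate_actions`, `featurize` (phi) and the weight vector `omega` are assumed helpers, not part of the paper.

```python
from typing import Set, Tuple

def possible_actions(state: State, swapped: Set[Tuple[int, int]]):
    """Candidate transitions for a state, with the formal constraint that
    SWAP is never applied twice to the same pair of nodes."""
    for action in enumerate_actions(state):    # assumed helper
        if action.name == "SWAP" and (action.s0, action.b0) in swapped:
            continue
        yield action

def greedy_parse(state: State, omega) -> SpanGraph:
    swapped: Set[Tuple[int, int]] = set()
    while not state.is_terminal():
        candidates = list(possible_actions(state, swapped))
        # Equation (1): score(t, s) = omega . phi(t, s)
        best = max(candidates, key=lambda t: omega.dot(featurize(t, state)))
        if best.name == "SWAP":
            swapped.add((best.s0, best.b0))
        state = best.apply(state)
    return state.graph
```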
{ "text": "As stated in section 3.2, the parameter of our model is the weight vector \u03c9 in the score function. To train the weight vector, we employ the averaged perceptron learning algorithm (Collins, 2002) .", "cite_spans": [ { "start": 176, "end": 191, "text": "(Collins, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1" }, { "text": "Algorithm 2 Learning algorithm. Input: sentence w = w_0 . . . w_n, D_w, G_w. Output: \u03c9. 1: s \u2190 s_0(D_w, w) 2: while s \u2209 S_t do 3: T \u2190 all possible actions according to s 4: bestT \u2190 argmax_{t \u2208 T} score(t, s) 5: goldT \u2190 oracle(s, G_w) 6: if bestT \u2260 goldT then 7: \u03c9 \u2190 \u03c9 \u2212 \u03c6(bestT, s) + \u03c6(goldT, s) 8: end if 9: s \u2190 apply goldT to s 10: end while", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Learning algorithm", "sec_num": null }, { "text": "For each sentence w and its corresponding AMR annotation G_AMR in the training corpus, we first obtain the dependency tree D_w of w with a dependency parser. Then we represent G_AMR as a span graph G_w, which serves as our learning target. The learning algorithm takes the training instances (w, D_w, G_w), parses D_w according to Algorithm 1, and gets the best action using the current weight vector \u03c9. The gold action for the current state s is given by consulting the span graph G_w, which we formulate as a function oracle() (line 5). If the gold action is equal to the best action we get from the parser, then the best action is applied to the current state; otherwise, we update the weight vector (lines 6-7) and continue the parsing procedure by applying the gold action.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Learning algorithm", "sec_num": null },
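Algorithm 2 shows the individual perceptron updates; the "averaged" variant of Collins (2002) additionally returns the average of all intermediate weight vectors. A sketch, with `initial_state`, `best_action`, `phi` and the oracle as assumed helpers:

```python
import numpy as np

def train_averaged_perceptron(instances, num_feats, epochs, oracle):
    """Averaged perceptron over parsing transitions; illustrative only."""
    omega = np.zeros(num_feats)   # current weight vector
    total = np.zeros(num_feats)   # running sum of weight vectors
    steps = 0
    for _ in range(epochs):
        for w, D_w, G_w in instances:
            state = initial_state(D_w, w)          # s <- s_0(D_w, w)
            while not state.is_terminal():
                best = best_action(state, omega)   # argmax of Eq. (1)
                gold = oracle(state, G_w)
                if best != gold:                   # perceptron update
                    omega += phi(gold, state) - phi(best, state)
                total += omega
                steps += 1
                state = gold.apply(state)          # follow the gold action
    return total / steps                           # averaged weights
```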
{ "text": "Table 2 : Features used in our parser. \u03c3_0, \u03b2_0, k and \u03c3_0p here denote the elements in the feature context corresponding to nodes \u03c3_0, \u03b2_0, k and \u03c3_0p. Each atomic feature is represented as follows: w - word; lem - lemma; ne - named entity; t - POS tag; dl - dependency label; len - length of the node's span.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.2" }, { "text": "Single node features: \u03c3_0.w, \u03c3_0.lem, \u03c3_0.ne, \u03c3_0.t, \u03c3_0.dl, \u03c3_0.len; \u03b2_0.w, \u03b2_0.lem, \u03b2_0.ne, \u03b2_0.t, \u03b2_0.dl, \u03b2_0.len; k.w, k.lem, k.ne, k.t, k.dl, k.len; \u03c3_0p.w, \u03c3_0p.lem, \u03c3_0p.ne, \u03c3_0p.t, \u03c3_0p.dl. Node pair features: \u03c3_0.lem + \u03b2_0.t, \u03c3_0.lem + \u03b2_0.dl; \u03c3_0.t + \u03b2_0.lem, \u03c3_0.dl + \u03b2_0.lem; \u03c3_0.ne + \u03b2_0.ne, k.ne + \u03b2_0.ne; k.t + \u03b2_0.lem, k.dl + \u03b2_0.lem. Path features: \u03c3_0.lem + \u03b2_0.lem + path_{\u03c3_0,\u03b2_0}; k.lem + \u03b2_0.lem + path_{k,\u03b2_0}. Distance features: dist_{\u03c3_0,\u03b2_0}; dist_{k,\u03b2_0}; dist_{\u03c3_0,\u03b2_0} + path_{\u03c3_0,\u03b2_0}; dist_{\u03c3_0,\u03b2_0} + path_{k,\u03b2_0}. Action-specific features: \u03b2_0.lem + \u03b2_0.nswp; \u03b2_0.reph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.2" }, { "text": "For transition-based dependency parsers, the feature context for a parsing state is represented by the neighboring elements of a word token in the stack containing the partial parse or the buffer containing unprocessed word tokens. In contrast, in our tree-to-graph parser, as already stated, buffers \u03c3 and \u03b2 only specify which arc or node is to be examined next. The feature context associated with the current arc or node is mainly extracted from the partial graph G. As a result, the feature context is different for the different types of actions, a property that makes our parser very different from a standard transition-based dependency parser. For example, when evaluating the action SWAP we may be interested in features about the individual nodes \u03c3_0 and \u03b2_0 as well as features involving the arc (\u03c3_0, \u03b2_0). In contrast, when evaluating the action REATTACH_k, we want to extract not only features involving \u03c3_0 and \u03b2_0, but also information about the reattached node k. To address this problem, we define the feature context as \u27e8\u03c3_0, \u03b2_0, k, \u03c3_0p\u27e9, where each element consists of the atomic features of the corresponding node and \u03c3_0p denotes the immediate parent of node \u03c3_0. For elements in the feature context that are not applicable to the candidate action, we simply set the element to NONE and only extract features which are valid for the candidate action. The list of features we use is shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 1379, "end": 1386, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.2" }, { "text": "Single node features are atomic features concerning all the possible nodes involved in each candidate state-action pair. We also include path features and distance features as described in (Flanigan et al., 2014) . A path feature path_{x,y} is represented as the dependency labels and parts of speech on the path between nodes x and y in the partial graph; we combine it with the lemmas of the starting and ending nodes. A distance feature dist_{x,y} is the number of tokens between the spans of nodes x and y in the sentence. Action-specific features record the history of actions applied to a given node. For example, \u03b2_0.nswp records how many times node \u03b2_0 has been swapped up; we combine this feature with the lemma of node \u03b2_0 to prevent the parser from swapping a node too many times. \u03b2_0.reph records the word features of the nodes that have been replaced with node \u03b2_0. This feature is helpful in predicting relation labels: as discussed above, a function word may be deleted as a node in an AMR graph, but it is crucial in determining the relation label between its child and its parent.", "cite_spans": [ { "start": 189, "end": 212, "text": "(Flanigan et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.2" },
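As an illustration of how such a feature map might be assembled, here is a sketch of phi; we assume each node records its word, lemma and POS tag (a `node_attrs` field not shown in the earlier State sketch), and we conjoin each feature with the action type, which is one simple way to make weights action-specific. The paper's exact feature templates are those listed in Table 2.

```python
from typing import Dict

def featurize(action, state: State) -> Dict[str, float]:
    """Sparse feature map phi(t, s) over the context <sigma_0, beta_0, k, sigma_0p>.
    Elements that do not apply to the candidate action are simply skipped (NONE)."""
    feats: Dict[str, float] = {}
    s0 = state.node_attrs[state.sigma[0]]               # assumed: word/lemma/POS
    b0 = state.node_attrs[state.beta[0]] if state.beta else None
    feats[f"s0.w={s0.word}"] = 1.0                      # single node features
    feats[f"s0.lem={s0.lemma}"] = 1.0
    if b0 is not None:                                  # node pair features
        feats[f"s0.lem+b0.t={s0.lemma}+{b0.pos}"] = 1.0
        feats[f"s0.t+b0.lem={s0.pos}+{b0.lemma}"] = 1.0
    # conjoin every feature with the action type so weights are learned per action
    return {f"{action.name}&{name}": v for name, v in feats.items()}
```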
{ "text": "Our experiments are conducted on the newswire section of the AMR Annotation Corpus (LDC2013E117) (Banarescu et al., 2013) .", "cite_spans": [ { "start": 93, "end": 117, "text": "(Banarescu et al., 2013)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting", "sec_num": "5.1" }, { "text": "We follow Flanigan et al. (2014) in setting up the train/development/test splits [1] for easy comparison: 4.0k sentences with document years 1995-2006 as the training set; 2.1k sentences with document year 2007 as the development set; 2.1k sentences with document year 2008 as the test set, using only AMRs that are tagged ::preferred. Each sentence w is preprocessed with the Stanford CoreNLP toolkit (Manning et al., 2014) to get part-of-speech tags, named entity information, and basic dependencies. We have verified that there is no overlap between the training data for the Stanford CoreNLP toolkit [2] and the AMR Annotation Corpus. We evaluate our parser with the Smatch tool (Cai and Knight, 2013), which seeks to maximize the semantic overlap between two AMR annotations. [1] A script to create the train/dev/test partitions is available at the following URL: http://goo.gl/vA32iI [2] Specifically, we used CoreNLP toolkit v3.3.1 and the parser model wsjPCFG.ser.gz trained on WSJ treebank sections 02-21.", "cite_spans": [ { "start": 10, "end": 32, "text": "Flanigan et al. (2014)", "ref_id": "BIBREF6" }, { "start": 404, "end": 426, "text": "(Manning et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting", "sec_num": "5.1" }, { "text": "One question about the transition system we presented above is whether the action set defined here can cover all the situations involving a dependency-to-AMR transformation. Although a formal theoretical proof is beyond the scope of this paper, we can empirically verify that the action set works well in practice. To validate the actions, we first run the oracle() function for each sentence w and its dependency tree D_w to get the \"pseudo-gold\" graph G'_w. Then we compare G'_w with the gold-standard AMR graph, represented as span graph G_w, to see how similar they are. On the training data we got an overall 99% F-score for all \u27e8G'_w, G_w\u27e9 pairs, which indicates that our action set is capable of transforming each sentence w and its dependency tree D_w into its gold-standard AMR graph through a sequence of actions. Table 3 gives the precision, recall and F-score of our parser given by Smatch on the test set. Our parser achieves an F-score of 63% (Row 3), and the result is 5% better than the first published result reported in (Flanigan et al., 2014) with the same training and test set (Row 2). 
We also conducted experiments on the test set by replacing the parsed graph with gold relation labels and/or gold concept labels.", "cite_spans": [ { "start": 1024, "end": 1047, "text": "(Flanigan et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 811, "end": 818, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Action Set Validation", "sec_num": "5.2" }, { "text": "We can see in Table 3 that when provided with gold concept and relation labels as input, the parsing accuracy improves by around 8% F-score (Row 6). Rows 4 and 5 present results when the parser is provided with just the gold relation labels (Row 4) or just the gold concept labels (Row 5), and the results are expectedly lower than when both gold concept and relation labels are provided as input.", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 187, "text": "Table 3", "ref_id": null }, { "start": 550, "end": 557, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.3" }, { "text": "Table 3 : Results on the test set. Here, l_gc = gold concept label; l_gr = gold relation label; l_grc = gold concept label and gold relation label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.3" }, { "text": "Figure 9 : Confusion matrix for actions \u27e8t_g, t\u27e9. The vertical axis ranges over the correct action types, and the horizontal axis over the predicted action types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "Wrong alignments between the word tokens in the sentence and the concepts in the AMR graph account for a significant proportion of our AMR parsing errors, but here we focus on errors in the transition from the dependency tree to the AMR graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "Since in our parsing model the parsing process has been decomposed into a sequence of actions applied to the input dependency tree, we can use the oracle() function during parsing to give us the correct action t_g to take for a given state s. A comparison between t_g and the best action t actually taken by our parser will give us a sense of how accurately each type of action is applied. When we compare the actions, we focus on the structural aspect of AMR parsing and only take into account the eight action types, ignoring the concept and edge labels attached to them. For example, NEXT-EDGE-ARG0 and NEXT-EDGE-ARG1 would be considered to be the same action and counted as a match when we compute the errors, even though the labels attached to them are different. Figure 9 shows the confusion matrix that presents a comparison between the parser-predicted actions and the correct actions given by the oracle() function. It shows that the NEXT-EDGE (ned), NEXT-NODE (nnd), and DELETE-NODE (dnd) actions account for a large proportion of the actions. These actions are also more accurately applied. As expected, the parser makes more mistakes involving the REATTACH (reat), REENTRANCE (reen) and SWAP (sw) actions. The REATTACH action is often used to correct PP-attachment errors made by the dependency parser or to readjust the structure resulting from the SWAP action, and it is hard to learn given the relatively small AMR training set. 
The SWAP action is often tied to coordination structures in which the choice of head in the dependency structure diverges from that in the AMR graph. In the Stanford dependency representation, which is the input to our parser, the head of a coordination structure is one of the conjuncts, while in AMR the head is an abstract concept signaled by one of the coordinating conjunctions. This also turns out to be one of the more difficult actions to learn. We expect, however, that as the AMR Annotation Corpus grows bigger, a parsing model trained on the larger training set will learn these actions better.", "cite_spans": [], "ref_spans": [ { "start": 773, "end": 781, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "Our work is directly comparable to JAMR (Flanigan et al., 2014) , the first published AMR parser. JAMR performs AMR parsing in two stages: concept identification and relation identification. They treat concept identification as a sequence labeling task and utilize a semi-Markov model to map spans of words in a sentence to concept graph fragments. For relation identification, they adopt graph-based techniques for non-projective dependency parsing. Instead of finding maximum-scoring trees over words, they propose an algorithm to find the maximum spanning connected subgraph (MSCG) over the concept fragments obtained from the first stage. In contrast, we adopt a transition-based approach that has its roots in transition-based dependency parsing (Yamada and Matsumoto, 2003; Nivre, 2003; Sagae and Tsujii, 2008) , where a series of actions is performed to transform a sentence into a dependency tree. As should be clear from our description, however, the actions in our parser are very different in nature from the actions used in transition-based dependency parsing.", "cite_spans": [ { "start": 40, "end": 63, "text": "(Flanigan et al., 2014)", "ref_id": "BIBREF6" }, { "start": 753, "end": 781, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF17" }, { "start": 782, "end": 794, "text": "Nivre, 2003;", "ref_id": "BIBREF9" }, { "start": 795, "end": 818, "text": "Sagae and Tsujii, 2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "There is also another line of research that attempts to design graph grammars such as hyperedge replacement grammar (HRG) (Chiang et al., 2013) and efficient graph-based algorithms for AMR parsing. Existing work along this line is still theoretical in nature, and no empirical results have been reported yet.", "cite_spans": [ { "start": 122, "end": 143, "text": "(Chiang et al., 2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We presented a novel transition-based parsing algorithm that takes the dependency tree of a sentence as input and transforms it into an Abstract Meaning Representation graph through a sequence of actions. We showed that our approach is linguistically intuitive, and our experimental results show that our parser outperforms the previous best reported results by a significant margin. In future work we plan to continue to improve our parser via better learning and decoding techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" } ], "back_matter": [ { "text": "We want to thank the anonymous reviewers for their suggestions. 
We also want to thank Jeffrey Flanigan, Xiaochang Peng, Adam Lopez and Giorgio Satta for discussion about ideas related to this work during the Fred Jelinek Memorial Workshop in Prague in 2014. This work was partially supported by the National Science Foundation via Grant No. 0910532, entitled Richer Representations for Machine Translation. All views expressed in this paper are those of the authors and do not necessarily represent the view of the National Science Foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Abstract Meaning Representation for Sembanking", "authors": [ { "first": "Laura", "middle": [], "last": "Banarescu", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "Shu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Madalina", "middle": [], "last": "Georgescu", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Griffitt", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", "volume": "", "issue": "", "pages": "178--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A transition-based system for joint part-of-speech tagging and labeled non-projective dependency parsing", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1455--1465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet and Joakim Nivre. 2012. A transition-based system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1455-1465. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Smatch: an evaluation metric for semantic feature structures", "authors": [ { "first": "Shu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "748--752", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748-752. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Parsing graphs with hyperedge replacement grammars", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Bevan", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "924--932", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 924-932, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Incremental parsing with the perceptron algorithm", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 111. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing", "volume": "10", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1-8. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A discriminative graph-based parser for the abstract meaning representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Flanigan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1426--1436", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. 
Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426-1436, Baltimore, Maryland, June. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 423-430. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An efficient algorithm for projective dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 8th International Workshop on Parsing Technologies (IWPT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT). Citeseer.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Incremental non-projective dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2007. Incremental non-projective dependency parsing. 
In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics;", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Proceedings of the Main Conference", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "396--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Proceedings of the Main Conference, pages 396-403. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Algorithms for deterministic incremental dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "4", "pages": "513--553", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513-553.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Non-projective dependency parsing in expected linear time", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "1", "issue": "", "pages": "351--359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 351-359. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Shift-reduce dependency dag parsing", "authors": [ { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "753--760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Sagae and Jun'ichi Tsujii. 2008. Shift-reduce dependency dag parsing. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 753-760. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A transition-based dependency parser using a dynamic parsing strategy", "authors": [ { "first": "Francesco", "middle": [], "last": "Sartorio", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francesco Sartorio, Giorgio Satta, and Joakim Nivre. 2013. A transition-based dependency parser using a dynamic parsing strategy. In Proceedings of the 51st", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "135--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 135-144. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "Hiroyasu", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IWPT", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, volume 3.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Figure 1: Dependency tree and AMR graph for the sentence, \"The police want to arrest Micheal Karras in Singapore.\"", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Figure 2: AMR graph and its span graph for the sentence, \"The police want to arrest Micheal Karras.\"", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "Figure 3: Collapsed nodes", "num": null, "type_str": "figure" }, "FIGREF4": { "uris": null, "text": "Figure 4: SWAP action", "num": null, "type_str": "figure" }, "FIGREF5": { "uris": null, "text": "Figure 5: REATTACH action", "num": null, "type_str": "figure" }, "FIGREF6": { "uris": null, "text": "Figure 6: REPLACE-HEAD action", "num": null, "type_str": "figure" }, "FIGREF7": { "uris": null, "text": "Figure 7: REENTRANCE action", "num": null, "type_str": "figure" }, "FIGREF8": { "uris": null, "text": "Figure 8: MERGE action", "num": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "text": "Table 1: Transitions designed in our parser. CH(x, y) means getting all of node x's children in graph y.", "content": "", "num": null, "html": null } } } }