{ "paper_id": "N15-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:33:23.752382Z" }, "title": "Transition-Based Syntactic Linearization", "authors": [ { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "yjliu@ir.hit.edu.cn" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "yuezhang@sutd.edu.sg" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "bqin@ir.hit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Syntactic linearization algorithms take a bag of input words and a set of optional constraints, and construct an output sentence and its syntactic derivation simultaneously. The search problem is NP-hard, and the current best results are achieved by bottom-up best-first search. One drawback of the method is low efficiency, and there is no theoretical guarantee that a full sentence can be found within bounded time. We propose an alternative algorithm that constructs output structures from left to right using beam-search. The algorithm is based on incremental parsing algorithms. We extend the transition system so that word ordering is performed in addition to syntactic parsing, resulting in a linearization system that runs in guaranteed quadratic time. 
In standard evaluations, our system runs an order of magnitude faster than a state-of-the-art baseline using best-first search, with improved accuracies.", "pdf_parse": { "paper_id": "N15-1012", "_pdf_hash": "", "abstract": [ { "text": "Syntactic linearization algorithms take a bag of input words and a set of optional constraints, and construct an output sentence and its syntactic derivation simultaneously. The search problem is NP-hard, and the current best results are achieved by bottom-up best-first search. One drawback of the method is low efficiency, and there is no theoretical guarantee that a full sentence can be found within bounded time. We propose an alternative algorithm that constructs output structures from left to right using beam-search. The algorithm is based on incremental parsing algorithms. We extend the transition system so that word ordering is performed in addition to syntactic parsing, resulting in a linearization system that runs in guaranteed quadratic time. In standard evaluations, our system runs an order of magnitude faster than a state-of-the-art baseline using best-first search, with improved accuracies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Linearization is the task of ordering a bag of words into a grammatical and fluent sentence. Syntax-based linearization algorithms generate a sentence along with its syntactic structure. 
Depending on how much syntactic information is available as input, recent work on syntactic linearization can be classified into free word ordering (Wan et al., 2009; Zhang et al., 2012; de Gispert et al., 2014) , which orders a bag of words without syntactic constraints, full tree linearization (He et al., 2009; Bohnet et al., 2010; Song et al., 2014) , which orders a bag of words given a full-spanning syntactic tree, and partial tree linearization (Zhang, 2013) , which orders a bag of words given some syntactic relations between them as partial constraints.", "cite_spans": [ { "start": 335, "end": 353, "text": "(Wan et al., 2009;", "ref_id": "BIBREF15" }, { "start": 354, "end": 373, "text": "Zhang et al., 2012;", "ref_id": "BIBREF23" }, { "start": 374, "end": 398, "text": "de Gispert et al., 2014)", "ref_id": "BIBREF4" }, { "start": 484, "end": 501, "text": "(He et al., 2009;", "ref_id": "BIBREF5" }, { "start": 502, "end": 522, "text": "Bohnet et al., 2010;", "ref_id": "BIBREF1" }, { "start": 523, "end": 541, "text": "Song et al., 2014)", "ref_id": "BIBREF14" }, { "start": 658, "end": 671, "text": "(Zhang, 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Induction Rules: SHIFT (\u03c3, [i|\u03b2], A) ([\u03c3| i], \u03b2, A) LEFTARC ([\u03c3|j i], \u03b2, A) ([\u03c3|i], \u03b2, A \u222a {j \u2190 i}) RIGHTARC ([\u03c3|j i], \u03b2, A) ([\u03c3|j], \u03b2, A \u222a {j \u2192 i})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The search space for syntactic linearization is huge. Even with a full syntax tree being available as constraints, permutation of nodes on each level is an NP-hard problem. 
As a result, heuristic search has been adopted by most previous work, and the best results have been achieved by a time-constrained best-first search framework (White, 2004a; White and Rajkumar, 2009; Zhang and Clark, 2011b; Song et al., 2014) . Though empirically highly accurate, one drawback of this approach is that there is no asymptotic upper bound on the time complexity of finding the first full sentence. As a result, it can take 5-10 seconds to process a sentence, and sometimes fail to yield a full sentence at timeout. This issue is more severe for larger bags of words, and makes the algorithms practically less useful.", "cite_spans": [ { "start": 333, "end": 347, "text": "(White, 2004a;", "ref_id": "BIBREF17" }, { "start": 348, "end": 373, "text": "White and Rajkumar, 2009;", "ref_id": "BIBREF16" }, { "start": 374, "end": 397, "text": "Zhang and Clark, 2011b;", "ref_id": "BIBREF21" }, { "start": 398, "end": 416, "text": "Song et al., 2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We study the effect of an alternative learning and search framework for the linearization problem, which has a theoretical upper bound on the time complexity, and always yields a full sentence in quadratic time. Our method is inspired by the connection between syntactic linearization and syntactic parsing: both build a syntactic tree over a sentence, with the former performing word ordering in addition to derivation construction. 
As a result, syntactic linearization can be treated as a generalized form of parsing, for which there is no input word order, and therefore extensions to parsing algorithms can be used to perform linearization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For syntactic parsing, the algorithm of Zhang and Nivre (2011) gives competitive accuracies under linear complexity. Compared with parsers that use dynamic programming (McDonald and Pereira, 2006; Koo and Collins, 2010) , the efficient beam-search system is more suitable for the NP-hard linearization task. We extend the parser of Zhang and Nivre (2011) , so that word ordering is performed in addition to syntactic tree construction. Experimental results show that the transition-based linearization system runs an order of magnitude faster than a state-of-the-art best-first baseline, with improved accuracies in standard evaluation. Our linearizer is publicly available under GPL at http://sourceforge.net/projects/zgen/.", "cite_spans": [ { "start": 40, "end": 62, "text": "Zhang and Nivre (2011)", "ref_id": "BIBREF22" }, { "start": 168, "end": 196, "text": "(McDonald and Pereira, 2006;", "ref_id": "BIBREF8" }, { "start": 197, "end": 219, "text": "Koo and Collins, 2010)", "ref_id": "BIBREF6" }, { "start": 332, "end": 354, "text": "Zhang and Nivre (2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task of dependency parsing is to find a dependency tree given an input sentence. Figure 2 shows an example dependency tree, which consists of dependency arcs that represent syntactic relations between pairs of words. 
A transition-based dependency parsing algorithm (Nivre, 2008) can be formalized as a transition system, S = (C, T, c s , C t ), where C is the set of states, T is a set of transition actions, c s is the initial state and C t is a set of terminal states. The parsing process is modeled as an application of a sequence of actions, transducing the initial state into a final state, while constructing dependency arcs. Table 1 : arc-standard transition action sequence for parsing the sentence in Figure 2 .", "cite_spans": [ { "start": 269, "end": 282, "text": "(Nivre, 2008)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 623, "end": 630, "text": "Table 1", "ref_id": null }, { "start": 701, "end": 709, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Transition-Based Parsing", "sec_num": "2" }, { "text": "A \u222a {4 \u2192 5} 7 RIGHTARC [1 2 3] [6] A \u222a {3 \u2192 4} 8 RIGHTARC [1 2] [6] A \u222a {2 \u2192 3} 9 SHIFT [1 2 6] [] 10 RIGHTARC [1 2] [] A \u222a {2 \u2192 6} 11 LEFTARC [2] [] A \u222a {1 \u2190 2}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Parsing", "sec_num": "2" }, { "text": "Each state in the transition system can be formalized as a tuple (\u03c3, \u03b2, A), where \u03c3 is a stack that maintains a partial derivation, \u03b2 is a buffer of incoming input words and A is the set of dependency relations that have been built.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Parsing", "sec_num": "2" }, { "text": "Our work is based on the arc-standard algorithm (Nivre, 2008 ). The deduction system of the arc-standard algorithm is shown in Figure 1 . \u2022 LEFTARC builds an arc {j \u2190 i} and pops j off the stack. \u2022 RIGHTARC builds an arc {j \u2192 i} and pops i off the stack. 
\u2022 SHIFT removes the front word k from the buffer \u03b2, and shifts it onto the stack.", "cite_spans": [ { "start": 48, "end": 60, "text": "(Nivre, 2008", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 126, "end": 134, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transition-Based Parsing", "sec_num": "2" }, { "text": "In the notation above, i, j and k are word indices of an input sentence. The arc-standard system assumes that each input word has been assigned a part-of-speech (POS) tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Parsing", "sec_num": "2" }, { "text": "The sentence in Figure 2 can be parsed by the transition sequence shown in Table 1 . Given an input sentence of n words, the algorithm takes 2n transitions to construct an output, because each word needs to be shifted onto the stack once and popped off once before parsing finishes, and all the transition actions are either shifting or popping actions. ", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 75, "end": 82, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Transition-Based Parsing", "sec_num": "2" }, { "text": "SHIFT-i-POS (\u03c3, \u03c1, A) ([\u03c3|i], \u03c1 \u2212 {i}, A) LEFTARC ([\u03c3|j i], \u03c1, A) ([\u03c3|i], \u03c1, A \u222a {j \u2190 i}) RIGHTARC ([\u03c3|j i], \u03c1, A) ([\u03c3|j], \u03c1, A \u222a {j \u2192 i})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Parsing", "sec_num": "2" }, { "text": "The main difference between linearization and dependency parsing is that the input words are unordered for linearization, which results in an unordered buffer \u03c1. At a certain state s = (\u03c3, \u03c1, A), any word in the buffer \u03c1 can be shifted onto the stack. In addition, unlike a parser, the vanilla linearization task does not assume that input words are assigned POS. 
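For concreteness, the three arc-standard actions described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a state is a (stack, buffer, arcs) triple, words are integer indices, and an arc (h, d) means h heads d.

```python
# Minimal sketch of the arc-standard transition system (illustrative only).
# A state is (stack, buffer, arcs); words are integer indices.

def shift(state):
    """SHIFT: remove the front word from the buffer and push it onto the stack."""
    stack, buffer, arcs = state
    return stack + [buffer[0]], buffer[1:], arcs

def left_arc(state):
    """LEFTARC: build an arc {j <- i} (i heads j) and pop j, the second-top item."""
    stack, buffer, arcs = state
    j, i = stack[-2], stack[-1]
    return stack[:-2] + [i], buffer, arcs | {(i, j)}

def right_arc(state):
    """RIGHTARC: build an arc {j -> i} (j heads i) and pop i, the top item."""
    stack, buffer, arcs = state
    j, i = stack[-2], stack[-1]
    return stack[:-2] + [j], buffer, arcs | {(j, i)}
```

For a three-word sentence headed at word 2, the sequence SHIFT, SHIFT, LEFTARC, SHIFT, RIGHTARC leaves word 2 on the stack with arcs {(2, 1), (2, 3)}.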
To extend the arc-standard algorithm for linearization, we incorporate word and POS into the SHIFT operation, transforming the arc-standard SHIFT operation to SHIFT-Word-POS, which selects the word Word from the buffer \u03c1, tags it with POS and shifts it onto the stack. Since the order of words in an output sentence equals the order in which they are shifted onto the stack, word ordering is performed along with the parsing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Linearization", "sec_num": "3" }, { "text": "Under such extension, the sentence in Figure 2 can be generated by the transition sequence (SHIFT-Dr. Talcott-NP, SHIFT-led-VBD, SHIFT-a team-NP, SHIFT-of-IN, SHIFT-Harvard University-NP, RIGHTARC, RIGHTARC, RIGHTARC, SHIFT-.-., RIGHTARC, LEFTARC), given the unordered bag of words (Dr. Talcott, led, a team, of, Harvard University, .).", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 47, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Transition-Based Linearization", "sec_num": "3" }, { "text": "The deduction system for the linearization algorithm is shown in Figure 3 . 
Given an input bag of n words, this algorithm also takes 2n transition actions to construct an output, for the same reason as the arc-standard parser.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 73, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Transition-Based Linearization", "sec_num": "3" }, { "text": "We apply the learning and search framework of Zhang and Clark (2011a) , which gives state-of-the-art transition-based parsing accuracies. Algorithm 1: transition-based linearization. Input: C, a set of input syntactic constraints. Output: The highest-scored final state", "cite_spans": [ { "start": 46, "end": 69, "text": "Zhang and Clark (2011a)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "1 candidates \u2190 ([ ], set(1..n), \u2205) 2 agenda \u2190 \u2205 3 for i \u2190 1..2n do 4 for s in candidates do 5 for action in GETPOSSIBLEACTIONS(s, C) do 6 agenda \u2190 APPLY(s, action) 7 candidates \u2190 TOP-K(agenda) 8 agenda \u2190 \u2205 9 best \u2190 BEST(candidates) 10 return best", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "The framework runs in linear time (Zhang and Nivre, 2011) . Pseudocode of the search algorithm is shown in Algorithm 1. It performs beam-search by using an agenda to keep the k-best states at each incremental step. When decoding starts, the agenda contains only the initial state. At each step, each state in the agenda is advanced by applying all possible transition actions (GETPOSSIBLEACTIONS), leading to a set of new states. The k best new states are selected, and used to replace the current states in the agenda, before the next decoding step starts. Given an input bag of n words, the process repeats for 2n steps, after which all the states in the agenda are terminal states, and the highest-scored state in the agenda is taken for the final output. 
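The beam-search skeleton of Algorithm 1 can be sketched as below. This is a deliberate simplification, not the released system: it keeps only SHIFT actions (pure word ordering, no arcs), and score_pair is a hypothetical local scoring hook standing in for the global linear model.

```python
def beam_search_order(words, score_pair, k=4):
    """Simplified sketch of Algorithm 1: beam search over SHIFT-only states.
    A state is (sequence so far, remaining words, accumulated score)."""
    candidates = [((), frozenset(words), 0.0)]
    for _ in range(len(words)):          # the full system runs 2n steps
        agenda = []
        for seq, remaining, total in candidates:
            for w in remaining:          # any word still in rho may be shifted
                prev = seq[-1] if seq else None
                agenda.append((seq + (w,), remaining - {w},
                               total + score_pair(prev, w)))
        candidates = sorted(agenda, key=lambda s: s[2], reverse=True)[:k]  # TOP-K
    return max(candidates, key=lambda s: s[2])  # BEST
```

With a toy bigram scorer that rewards "the cat sat", the search recovers that order from the unordered bag.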
The complexity of this algorithm is O(n^2), because it takes a fixed 2n steps to construct an output, and in each step the number of possible SHIFT actions is proportional to the size of \u03c1.", "cite_spans": [ { "start": 64, "end": 87, "text": "(Zhang and Nivre, 2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "The search algorithm ranks search hypotheses, which are sequences of state transitions, by their scores. A global linear model is used to score search hypotheses. Given a hypothesis h, its score is calculated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "Score(h) = \u03a6(h) \u2022 \u20d7 \u03b8,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "where \u20d7 \u03b8 is the parameter vector of the model and \u03a6(h) is the global feature vector of h, extracted by instantiating the feature templates in Table 2 according to each state in the transition sequence.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "In the table, S 0 represents the first word on the top of the stack, S 1 represents the second word on the top of the stack, w represents a word and p represents a POS-tag. The unigram, bigram and trigram feature templates are adapted from Zhang and Nivre (2011) , which capture context information for S 0 , S 1 and their modifiers. The original feature templates of Zhang and Nivre (2011) also contain information about the front words on the buffer. 
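Returning to the scoring model, the global linear score Score(h) = \u03a6(h) \u2022 \u03b8 can be sketched as follows. The extract hook and the feature names in the usage below are illustrative, not the paper's exact templates.

```python
from collections import Counter

def score(states, theta, extract):
    """Sketch of the global linear model Score(h) = Phi(h) . theta:
    Phi(h) sums per-state feature counts over the whole transition
    sequence, and the score is the dot product with the weights theta."""
    phi = Counter()
    for state in states:
        phi.update(extract(state))          # accumulate the global feature vector
    return sum(theta.get(f, 0.0) * v for f, v in phi.items())
```

Unseen features simply contribute weight zero, which is the usual sparse-vector convention for perceptron-trained models.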
However, since the buffer is unordered for linearization, we do not include these features.", "cite_spans": [ { "start": 156, "end": 178, "text": "Zhang and Nivre (2011)", "ref_id": "BIBREF22" }, { "start": 284, "end": 306, "text": "Zhang and Nivre (2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "Unigrams S 0 w; S 0 p; S 0,l w; S 0,l p; S 0,r w; S 0,r p; S 0,l2 w; S 0,l2 p; S 0,r2 w; S 0,r2 p; S 1 w; S 1 p; S 1,l w; S 1,l p; S 1,r w; S 1,r p; S 1,l2 w; S 1,l2 p; S 1,r2 w; S 1,r2 p; Bigram S 0 wS 0,l w; S 0 wS 0,l p; S 0 pS 0,l w; S 0 pS 0,l p; S 0 wS 0,r w; S 0 wS 0,r p; S 0 pS 0,r w; S 0 pS 0,r p; S 1 wS 1,l w; S 1 wS 1,l p; S 1 pS 1,l w; S 1 pS 1,l p; S 1 wS 1,r w; S 1 wS 1,r p; S 1 pS 1,r w; S 1 pS 1,r p; S 0 wS 1 w; S 0 wS 1 p; S 0 pS 1 w; S 0 pS 1 p Trigram S 0 wS 0 pS 0,l w; S 0 wS 0,l wS 0,l p; S 0 wS 0 pS 0,l p; S 0 pS 0,l wS 0,l p; S 0 wS 0 pS 0,r w; S 0 wS 0,l wS 0,r p; S 0 wS 0 pS 0,r p; S 0 pS 0,r wS 0,r p; S 1 wS 1 pS 1,l w; S 1 wS 1,l wS 1,l p; S 1 wS 1 pS 1,l p; S 1 pS 1,l wS 1,l p; S 1 wS 1 pS 1,r w; S 1 wS 1,l wS 1,r p; S 1 wS 1 pS 1,r p; S 1 pS 1,r wS 1,r p; Linearization w 0 ; p 0 ; w \u22121 w 0 ; p \u22121 p 0 ; w \u22122 w \u22121 w 0 ; p \u22122 p \u22121 p 0 ; S 0,l wS 0,l2 w; S 0,l pS 0,l2 p; S 0,r2 wS 0,r w; S 0,r2 pS 0,r p; S 1,l wS 1,l2 w; S 1,l pS 1,l2 p; S 1,r2 wS 1,r w; S 1,r2 pS 1,r p;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "The linearization feature templates are specific to linearization, and capture surface n-gram information. Each search state represents a partially linearized sentence. 
We represent the last word in the partially linearized sentence as w 0 and the second-to-last as w \u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "Given a set of labeled training examples, the averaged perceptron (Collins, 2002) with early update (Collins and Roark, 2004; Zhang and Nivre, 2011) is used to train the parameters \u20d7 \u03b8 of the model.", "cite_spans": [ { "start": 66, "end": 81, "text": "(Collins, 2002)", "ref_id": "BIBREF3" }, { "start": 100, "end": 125, "text": "(Collins and Roark, 2004;", "ref_id": "BIBREF2" }, { "start": 126, "end": 148, "text": "Zhang and Nivre, 2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Search and Learning", "sec_num": "3.1" }, { "text": "The use of syntactic constraints to achieve better linearization performance has been studied in previous work. Wan et al. (2009) use syntactic constraints in learning a dependency language model. Zhang and Clark (2011b) take supertags as constraints to a CCG linearizer. Zhang (2013) demonstrates the possibility of partial-tree linearization, which allows a whole spectrum of input syntactic constraints. In practice, input syntactic constraints, including POS and dependency relations, can be obtained from an earlier stage of a generation pipeline, such as lexical transfer results in machine translation.", "cite_spans": [ { "start": 112, "end": 129, "text": "Wan et al. (2009)", "ref_id": "BIBREF15" }, { "start": 246, "end": 258, "text": "Zhang (2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Input Syntactic Constraints", "sec_num": "3.2" }, { "text": "It is relatively straightforward to apply input constraints to a best-first system (Zhang, 2013) , but less so for beam-search. 
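The basic mechanism used below, namely letting input constraints filter the SHIFT actions a state may take, can be sketched as follows. The interface and names here are ours, not the released system's, and only the simple POS case is shown.

```python
def get_possible_shifts(rho, pos_constraints, all_pos):
    """Sketch: enumerate SHIFT-Word-POS actions for the words still in rho.
    When a word's POS is given as a constraint, only one SHIFT per word
    remains; otherwise every POS tag is tried (hypothetical interface)."""
    actions = []
    for word in rho:
        tags = [pos_constraints[word]] if word in pos_constraints else all_pos
        for tag in tags:
            actions.append(("SHIFT", word, tag))
    return actions
```

This mirrors the observation in the POS Constraints subsection: a POS constraint shrinks the number of SHIFT actions for a word from the number of all POS tags to 1.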
In this section, we utilize the input syntactic constraints by letting them determine the possible actions for each state, namely the return value of GETPOSSIBLEACTIONS in Algorithm 1. Thus, when input POS-tags and dependencies are given, the generation system can yield more fully specified outputs.", "cite_spans": [ { "start": 83, "end": 96, "text": "(Zhang, 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Input Syntactic Constraints", "sec_num": "3.2" }, { "text": "POS is the simplest form of constraints to the transition-based linearization system. When the POS of an input word is given, the POS-tag component in the SHIFT-Word-POS operation is fixed, and the number of SHIFT actions for the word is reduced from the number of all POS tags to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Constraints", "sec_num": "3.2.1" }, { "text": "In partial tree linearization, a set of dependency arcs that form a partial dependency tree is given to the linearization system as input constraints. Figure 4 illustrates an example. The search space can be reduced by ignoring the transition sequences that do not result in a dependency tree that is consistent with the input constraints. Take the partial tree in Figure 4 for example. 
At the state s = ([Harvard University 5 ], set(1..n)-{5}, \u2205), it is illegal to shift the base phrase a team 3 onto the stack, because this action will result in a sub-sequence (Harvard University 5 , a team 3 , of 4 ), which cannot have the dependency arcs {3 \u2192 4}, {4 \u2192 5} by using arc-standard actions. Algorithm 2: GETPOSSIBLEACTIONS for partial tree linearization, where C is a partial tree. Input: A state s = ([\u03c3|j i], \u03c1, A) and partial tree C. Output: A set of possible transition actions T. 1 if s.\u03c3 is empty then", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 159, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 364, "end": 372, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "2 for k \u2208 s.\u03c1 do 3 T \u2190 T \u222a (SHIFT, P OS, k) 4 else 5 if REDUCABLE(s, i, j, C) then 6 T \u2190 T \u222a (LEFTARC) 7 if REDUCABLE(s, j, i, C) then 8 T \u2190 T \u222a (RIGHTARC) 9 for k \u2208 s.\u03b2 do 10 if SHIFTLEGAL(s, k, C) then 11 T \u2190 T \u222a (SHIFT, P OS, k) 12 return T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "Algorithm 2 shows pseudocode of GETPOSSIBLEACTIONS when C is a partial tree. Given a state s = ([\u03c3|j i], \u03c1, A), the LEFTARC action builds an arc {j \u2190 i} and pops the word j off the stack. Since the popped word j cannot be linked to any words in future transitions, all the descendants of j should have been processed and removed from the stack. In addition, constrained by the given partial tree, the arc {j \u2190 i} should be an arc in C (Figure 5a) , or j should be the root of a sub-dependency tree in C (Figure 5b) . 
We denote the conditions as REDUCABLE(s, i, j, C) (lines 5-6). The case for RIGHTARC is similar to LEFTARC (lines 7-8).", "cite_spans": [], "ref_spans": [ { "start": 435, "end": 446, "text": "(Figure 5a)", "ref_id": "FIGREF6" }, { "start": 503, "end": 514, "text": "(Figure 5b)", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "For the SHIFT action, the conditions are more complex. Due to space limitations, we briefly sketch the SHIFTLEGAL function below. Detailed pseudocode for SHIFTLEGAL is given in the supplementary material. For a word k in \u03c1 to be shifted onto the stack, all the words on the stack must satisfy certain constraints. There are five possible relations between k and a word l on the stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "(1) If l is a child of k in C (Figure 6a ), all the words on the stack from l to the top of the stack should be reducible to k, because only LEFTARC can be applied between k and these words in future actions.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 40, "text": "(Figure 6a", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "(2) If l is a grandchild of k (Figure 6b ), no legal sentence can be constructed if k is shifted onto the stack. (3) If l is the parent of k (Figure 6c ), legal SHIFTs require all the words on the stack from l to the top to be reducible to k. (4) If l is a grandparent of k, all the words on the stack from l to the top will become descendants of l in the output (Figure 6e ). Thus these words must be descendants of l in C, or the roots of different sub-dependency trees. (5) If l is a sibling of k, we denote a as the least common ancestor of k and l. a will still be in the buffer, and l should be a direct child of a. 
All the words from l to the top of the stack should be the descendants of a in the output (Figure 6d ), and thus a must satisfy the same conditions as in (4). Finally, if no word on the stack is in the same sub-dependency tree as k in C, then k can be safely shifted.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 41, "text": "(Figure 6b", "ref_id": "FIGREF7" }, { "start": 142, "end": 152, "text": "(Figure 6c", "ref_id": "FIGREF7" }, { "start": 365, "end": 375, "text": "(Figure 6e", "ref_id": "FIGREF7" }, { "start": 710, "end": 720, "text": "(Figure 6d", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "Algorithm 3: GETPOSSIBLEACTIONS for full tree linearization, where C is a full tree", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "Input: A state s = ([\u03c3|j i], \u03c1, A) and gold tree C Output: A set of possible transition actions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "T 1 T \u2190 \u2205 2 if s.\u03c3 is empty then 3 for k \u2208 s.\u03c1 do 4 T \u2190 T \u222a (SHIFT, P OS, k) 5 else 6 if \u2203j, j \u2208 (DESCENDANTS(i) \u2229 s.\u03c1) then 7 for j \u2208 (DESCENDANTS(i) \u2229 s.\u03c1) do 8 T \u2190 T \u222a (SHIFT, P OS, j) 9 else 10 if {j \u2192 i} \u2208 C then 11 T \u2190 T \u222a (RIGHTARC) 12 else if {j \u2190 i} \u2208 C then 13 T \u2190 T \u222a (LEFTARC) 14 else 15 for k \u2208 (SIBLINGS(i) \u222a HEAD(i)) \u2229 s.\u03c1 do 16 T \u2190 T \u222a (SHIFT, P OS, k) 17 return T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Tree Constraints", "sec_num": "3.2.2" }, { "text": "Algorithm 2 can also be used with full-tree constraints, which are a special case of partial-tree constraints. However, there is a conceptually simpler algorithm that leverages full-tree constraints. 
Because tree linearization is frequently studied in the literature, we describe this algorithm in Algorithm 3. When the stack is empty, we can freely move any word in the buffer \u03c1 onto the stack (lines 2-4). If not all the descendants of the stack top i have been processed, the next transition actions should move them onto the stack, so that arcs can be constructed between i and these words (lines 6-8). If all the descendants of i have been processed, the next action should eagerly build arcs between the top two words i and j on the stack (lines 10-13). If no arc exists between i and j, the next action should shift the parent word of i or a word in i's sibling tree (lines 14-16).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Full Tree Constraints", "sec_num": "3.2.3" }, { "text": "We follow previous work and conduct experiments on the Penn Treebank (PTB), using Wall Street Journal sections 2-21 for training, 22 for development testing and 23 for final testing. Gold-standard dependency trees are derived from bracketed sentences in the treebank using Penn2Malt 1 , and base noun phrases are treated as a single word (Wan et al., 2009; Zhang, 2013) . The BLEU score (Papineni et al., 2002) is used to evaluate the performance of linearization, which has been adopted in previous literature (Wan et al., 2009; White and Rajkumar, 2009; Zhang and Clark, 2011b) and recent shared tasks (Belz et al., 2011) . 
We use our implementation of the best-first system of Zhang (2013) , which gives the state-of-the-art results, as the baseline.", "cite_spans": [ { "start": 340, "end": 358, "text": "(Wan et al., 2009;", "ref_id": "BIBREF15" }, { "start": 359, "end": 371, "text": "Zhang, 2013)", "ref_id": "BIBREF24" }, { "start": 389, "end": 412, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF10" }, { "start": 509, "end": 527, "text": "(Wan et al., 2009;", "ref_id": "BIBREF15" }, { "start": 528, "end": 553, "text": "White and Rajkumar, 2009;", "ref_id": "BIBREF16" }, { "start": 554, "end": 577, "text": "Zhang and Clark, 2011b)", "ref_id": "BIBREF21" }, { "start": 602, "end": 621, "text": "(Belz et al., 2011)", "ref_id": "BIBREF0" }, { "start": 678, "end": 690, "text": "Zhang (2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We first study the influence of beam size by performing free word ordering on the development test data. BLEU score curves with different beam sizes are shown in Figure 7 . From this figure, we can see that the systems with beam sizes 64 and 128 achieve the best results. However, the 128-beam system does not improve the performance significantly (48.2 vs 47.5), but is twice as slow. 
As a result, we set the beam size to 64 in the remaining experiments.", "cite_spans": [], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 7", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Influence of Beam size", "sec_num": "4.1" }, { "text": "To test the effectiveness of GETPOSSIBLEACTIONS under different input constraints, we follow Zhang (2013) and evaluate the system under different settings of input syntactic constraints. BLEU scores along with the average time to order one sentence are shown in Table 3 .", "cite_spans": [ { "start": 93, "end": 105, "text": "Zhang (2013)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 181, "end": 188, "text": "Table 3", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Input Syntactic Constraints", "sec_num": "4.2" }, { "text": "With more syntactic information in the input, our linearization system achieves better performance, showing that GETPOSSIBLEACTIONS can take advantage of the input constraints and yield more fully specified output. In addition, because input constraints reduce the search space, the systems with more syntactic information achieve faster decoding speeds. In comparison with Zhang (2013), the transition-based system achieves improved accuracies under all settings, and the decoding speed can be over two orders of magnitude faster (22ms vs. 4218ms). We give more detailed analysis next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Syntactic Constraints", "sec_num": "4.2" }, { "text": "The beam-search linearizer adopts a very different search strategy from best-first search, which affects the error distribution. As mentioned earlier, one problem of best-first is the lack of a theoretical guarantee on time complexity. As a result, a time constraint is used and default output can be constructed when no full output is found (White, 2004b; Zhang and Clark, 2011b) . This may result in incomplete output sentences, and intuitively, this problem is more severe for larger bags of words. 
In contrast, the transition-based linearization algorithm takes 2n steps to generate a sentence, and is thus guaranteed to order all the input words. Figure 8 shows the results, comparing the brevity scores (i.e. the number of words in the output divided by the number of words in the reference sentence) on inputs of different sizes. Best-first search can fail to order all the input words even on bags of 9-11 words, and the problem is more severe for larger bags of words. On the other hand, the transition-based method uses all the input words to generate the output, and its brevity score is a constant 1. Since the BLEU score consists of two parts, n-gram precision and brevity, this comparison partly explains why the transition-based linearization algorithm achieves higher BLEU scores.", "cite_spans": [ { "start": 348, "end": 362, "text": "(White, 2004b;", "ref_id": "BIBREF18" }, { "start": 363, "end": 386, "text": "Zhang and Clark, 2011b)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 654, "end": 662, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Comparison with Best-First", "sec_num": "4.3" }, { "text": "To further compare the two systems, we evaluate the quality of projective spans, which are dependency treelets. Both systems build outputs bottom-up by constructing projective spans, and a break-down of span accuracies by span size shows the effects of the different search algorithms. The results are shown in Table 4. According to this table, the best-first system tends to construct smaller spans more precisely, but its recall is relatively lower. 
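The span precision, recall and F-scores reported in Table 4 follow the standard set-overlap definitions, which can be sketched as below (a generic illustration of the metric, not the authors' evaluation code; representing spans as (start, end) index pairs is our assumption):

```python
# Precision/recall/F-score over projective spans, as compared in Table 4.
# Generic sketch of the metric; the (start, end) span representation is
# an assumption for illustration, not the authors' evaluation code.

def span_prf(predicted, gold):
    # Precision: fraction of predicted spans found in gold.
    # Recall: fraction of gold spans that were predicted.
    # F: harmonic mean of the two.
    pred, ref = set(predicted), set(gold)
    correct = len(pred & ref)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(ref) if ref else 0.0
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f
```

Under these definitions, a system that proposes few but accurate small spans shows exactly the pattern in Table 4: high precision with comparatively low recall.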
Overall, higher F-scores are achieved by the transition-based system.", "cite_spans": [], "ref_spans": [ { "start": 342, "end": 349, "text": "Table 4", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Comparison with Best-First", "sec_num": "4.3" }, { "text": "During the decoding process, the best-first system compares spans of different sizes and expands those that have higher scores. As a result, the number of expanded spans does not have a fixed correlation with span size, and there can be fewer but better small spans expanded. In contrast, the transition-based system models transition sequences rather than individual spans, and therefore the distribution of spans of different sizes in each hypothesis resembles that of the training data. Figure 9 verifies this analysis by counting the distributions of span lengths in the search algorithms of the two systems and in the gold dependency trees. The distribution of the transition-based system is closer to that of the gold dependency trees, while the best-first system outputs fewer small spans and more long ones. This explains the higher precision of the best-first system on smaller spans.", "cite_spans": [], "ref_spans": [ { "start": 487, "end": 495, "text": "Figure 9", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Comparison with Best-First", "sec_num": "4.3" }, { "text": "The final results on the test set of the Penn Treebank are shown in Table 5. Compared with previous studies, our transition-based linearization system achieves the best results on all the tests. Table 6 shows some example output sentences when there are no input constraints. 
For longer sentences, the transition-based method gives noticeably better results.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 5", "ref_id": "TABREF11" }, { "start": 192, "end": 199, "text": "Table 6", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Final Results", "sec_num": "4.4" }, { "text": "output BL ref.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Results", "sec_num": "4.4" }, { "text": "There is no asbestos in our products now . Z13 There is no asbestos now in our products . 43.5 ours There is now our products in no asbestos . 17.8 ref.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Results", "sec_num": "4.4" }, { "text": "Previously , watch imports were denied such duty-free treatment . Z13 such duty-free treatment Previously , watch imports were denied .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Results", "sec_num": "4.4" }, { "text": "ours Previously , watch imports were denied such duty-free treatment . ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "67.6", "sec_num": null }, { "text": "The input to a practical natural language generation (NLG) system (Reiter and Dale, 1997) can range from a bag of words and phrases to a bag of lemmas without punctuation (Belz et al., 2011). The linearization module of this paper can serve as the final stage in such a pipeline, when the bag of words and their optional syntactic information are given. 
There has also been work on jointly performing linearization and morphological generation (Song et al., 2014).", "cite_spans": [ { "start": 64, "end": 87, "text": "(Reiter and Dale, 1997)", "ref_id": "BIBREF12" }, { "start": 169, "end": 188, "text": "(Belz et al., 2011)", "ref_id": "BIBREF0" }, { "start": 434, "end": 453, "text": "(Song et al., 2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "There has been work on linearization with unlabeled and labeled dependency trees (He et al., 2009; Zhang, 2013). These methods mostly use greedy or best-first algorithms to order each tree node. Our work differs in that it performs word ordering through a transition process.", "cite_spans": [ { "start": 81, "end": 98, "text": "(He et al., 2009;", "ref_id": "BIBREF5" }, { "start": 99, "end": 111, "text": "Zhang, 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Besides dependency grammar, linearization with other syntactic grammars, such as CFG and CCG (White and Rajkumar, 2009; Zhang and Clark, 2011b), has also been studied. In this paper, we adopt dependency grammar for transition-based linearization. 
However, since transition-based parsing algorithms have been successfully applied to different grammars, including CFG (Sagae et al., 2005) and CCG (Xu et al., 2014), our linearization method can also be applied to these grammars.", "cite_spans": [ { "start": 93, "end": 119, "text": "(White and Rajkumar, 2009;", "ref_id": "BIBREF16" }, { "start": 120, "end": 143, "text": "Zhang and Clark, 2011b)", "ref_id": "BIBREF21" }, { "start": 370, "end": 390, "text": "(Sagae et al., 2005)", "ref_id": "BIBREF13" }, { "start": 399, "end": 416, "text": "(Xu et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We studied transition-based syntactic linearization as an extension to transition-based parsing. Compared with best-first systems, the advantages of our transition-based algorithm include bounded time complexity and the guarantee of yielding full sentences when given a bag of words. Experimental results show that our algorithm achieves improved accuracies, with significantly faster decoding speed, compared with a state-of-the-art best-first baseline. We publicly release our code at http://sourceforge.net/projects/zgen/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "For future work, we will study the incorporation of large-scale language models, and the integration of morphological generation and linearization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://stp.lingfil.uu.se/~nivre/research/Penn2Malt.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their constructive comments. 
This work was supported by the National Key Basic Research Program of China via grant 2014CB340503 and the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301 and SRG ISTD 2012 038 from Singapore University of Technology and Design.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The first surface realisation shared task: Overview and evaluation results", "authors": [ { "first": "Anja", "middle": [], "last": "Belz", "suffix": "" }, { "first": "Mike", "middle": [], "last": "White", "suffix": "" }, { "first": "Dominic", "middle": [], "last": "Espinosa", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Kow", "suffix": "" }, { "first": "Deirdre", "middle": [], "last": "Hogan", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Stent", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "217--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anja Belz, Mike White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The first surface realisation shared task: Overview and evaluation results. In Proceedings of the Genera- tion Challenges Session at the 13th European Work- shop on Natural Language Generation, pages 217- 226, Nancy, France, September. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Broad coverage multilingual deep sentence generation with a stochastic multi-level realizer", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Wanner", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Mill", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Burga", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "98--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet, Leo Wanner, Simon Mill, and Alicia Burga. 2010. Broad coverage multilingual deep sen- tence generation with a stochastic multi-level realizer. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 98-106, Beijing, China, August. Coling 2010 Orga- nizing Committee.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Incremental parsing with the perceptron algorithm", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", "volume": "", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. 
In Proceedings of the 42nd Meeting of the Association for Computa- tional Linguistics (ACL'04), Main Volume, pages 111- 118, Barcelona, Spain, July.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natu- ral Language Processing, pages 1-8. Association for Computational Linguistics, July.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Association for Computational Linguistics", "authors": [ { "first": "Marcus", "middle": [], "last": "Adri\u00e0 De Gispert", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Tomalin", "suffix": "" }, { "first": "", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "259--268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adri\u00e0 de Gispert, Marcus Tomalin, and Bill Byrne. 2014. Word ordering with phrase-based grammars. In Pro- ceedings of the 14th Conference of the European Chapter of the Association for Computational Linguis- tics, pages 259-268, Gothenburg, Sweden, April. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dependency based chinese sentence realization", "authors": [ { "first": "Wei", "middle": [], "last": "He", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yuqing", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "809--816", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei He, Haifeng Wang, Yuqing Guo, and Ting Liu. 2009. Dependency based chinese sentence realization. In Proceedings of the Joint Conference of the 47th An- nual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 809-816, Suntec, Singapore, Au- gust. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Efficient thirdorder dependency parsers", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo and Michael Collins. 2010. Efficient third- order dependency parsers. In Proceedings of the 48th", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 1-11, Uppsala, Sweden, July. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Online learning of approximate dependency parsing algorithms", "authors": [ { "first": "T", "middle": [], "last": "Ryan", "suffix": "" }, { "first": "", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Fernando", "suffix": "" }, { "first": "", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan T McDonald and Fernando CN Pereira. 2006. On- line learning of approximate dependency parsing algo- rithms. In EACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Algorithms for deterministic incremental dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "4", "pages": "513--553", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2008. Algorithms for deterministic incre- mental dependency parsing. Computational Linguis- tics, 34(4):513-553.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 40th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. 
In Proceedings of 40th", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylva- nia, USA, July. Association for Computational Lin- guistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Building applied natural language generation systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 1997, "venue": "Nat. Lang. Eng", "volume": "3", "issue": "1", "pages": "57--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Nat. Lang. Eng., 3(1):57-87, March.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic measurement of syntactic development in child language", "authors": [ { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Macwhinney", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "197--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Sagae, Alon Lavie, and Brian MacWhinney. 2005. Automatic measurement of syntactic development in child language. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguis- tics (ACL'05), pages 197-204, Ann Arbor, Michigan, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Joint morphological generation and syntactic linearization", "authors": [ { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Song", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "AAAI", "volume": "", "issue": "", "pages": "1522--1528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linfeng Song, Yue Zhang, Kai Song, and Qun Liu. 2014. Joint morphological generation and syntactic linearization. In AAAI, pages 1522-1528.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improving grammaticality in statistical sentence generation: Introducing a dependency spanning tree algorithm with an argument satisfaction model", "authors": [ { "first": "Stephen", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dras", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" }, { "first": "C\u00e9cile", "middle": [], "last": "Paris", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)", "volume": "", "issue": "", "pages": "852--860", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Wan, Mark Dras, Robert Dale, and C\u00e9cile Paris. 2009. Improving grammaticality in statistical sen- tence generation: Introducing a dependency spanning tree algorithm with an argument satisfaction model. In Proceedings of the 12th Conference of the Euro- pean Chapter of the ACL (EACL 2009), pages 852- 860, Athens, Greece, March. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Perceptron reranking for CCG realization", "authors": [ { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Rajakrishnan", "middle": [], "last": "Rajkumar", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "410--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael White and Rajakrishnan Rajkumar. 2009. Per- ceptron reranking for CCG realization. In Proceedings of the 2009 Conference on Empirical Methods in Nat- ural Language Processing, pages 410-419, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Reining in CCG chart realization", "authors": [ { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2004, "venue": "Proc. INLG-04", "volume": "", "issue": "", "pages": "182--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael White. 2004a. Reining in CCG chart realiza- tion. In In Proc. INLG-04, pages 182-191.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Reining in ccg chart realization", "authors": [ { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2004, "venue": "Natural Language Generation", "volume": "", "issue": "", "pages": "182--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael White. 2004b. Reining in ccg chart realiza- tion. In Natural Language Generation, pages 182- 191. 
Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Shift-reduce ccg parsing with a dependency model", "authors": [ { "first": "Wenduan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "218--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenduan Xu, Stephen Clark, and Yue Zhang. 2014. Shift-reduce ccg parsing with a dependency model. In Proceedings of the 52nd Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pages 218-227, Baltimore, Maryland, June. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Syntactic processing using the generalized perceptron and beam search", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics", "volume": "37", "issue": "1", "pages": "105--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2011a. Syntactic process- ing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105-151.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Syntax-based grammaticality improvement using CCG and guided search", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1147--1157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2011b. 
Syntax-based grammaticality improvement using CCG and guided search. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1147-1157, Edinburgh, Scotland, UK., July. As- sociation for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Transition-based dependency parsing with rich non-local features", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "188--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 188-193, Portland, Ore- gon, USA, June. Association for Computational Lin- guistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Syntax-based word ordering incorporating a large-scale language model", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Blackwood", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "736--746", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang, Graeme Blackwood, and Stephen Clark. 2012. Syntax-based word ordering incorporating a large-scale language model. In Proceedings of the 13th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 736- 746, Avignon, France, April. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Partial-tree linearization: Generalized word ordering for text synthesis", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2013, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang. 2013. Partial-tree linearization: Generalized word ordering for text synthesis. In IJCAI.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "The arc-standard parsing algorithm." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Example dependency tree." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": ". In this system, three transition actions are used: LEFT-ARC, RIGHTARC and SHIFT. Given a state s = ([\u03c3| j i], [k|\u03b2], A)," }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Deduction system for transition-based linearization. Indices i, j do not reflect word order." }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "Example partial tree. Words in the same sub dependency trees are grouped by rounded boxes. Word indices do not specify their orders. Base phrases (e.g. Dr. Talcott) are treated as single words." }, "FIGREF6": { "type_str": "figure", "num": null, "uris": null, "text": "Two conditions for a valid LEFTARC action in partial-tree linearization. The indices correspond to those inFigure 4. A shaded triangle represents the readily built arcs under a root word." }, "FIGREF7": { "type_str": "figure", "num": null, "uris": null, "text": "5 relations between k and l. The indices correspond to those inFigure 4. The words in green boxes must have arcs with k in future transitions." }, "FIGREF8": { "type_str": "figure", "num": null, "uris": null, "text": "Dev. results with different beam sizes." 
}, "FIGREF9": { "type_str": "figure", "num": null, "uris": null, "text": "Distributions of spans outputted by the best-first, transition-based systems and the gold trees. [Table 5 header fragment: no pos / all pos / all pos; no dep / no dep / all dep; Wan et al. (2009) -33.7 -; Zhang and Clark (2011b) -40.]" }, "FIGREF10": { "type_str": "figure", "num": null, "uris": null, "text": "Despite recent declines in yields , investors continue to pour cash into money funds . Z13 continue yields investors pour to recent declines in cash , into money" }, "TABREF1": { "num": null, "content": "
NP Dr. Talcott (1) | VBD led (2) | NP a team (3) | IN | NP
", "type_str": "table", "html": null, "text": "of (4) | Harvard University (5) | . (6)" }, "TABREF4": { "num": null, "content": "", "type_str": "table", "html": null, "text": "Feature templates. resent a POS-tag. The feature templates can be classified into four types: unigram, bigram, trigram and linearization. The first three types are taken from the dependency parser of" }, "TABREF8": { "num": null, "content": "
[Figure 8 plot: surface string brevity (y-axis 0.900 to 1.000) against input bag size bins 1−8, 9−11, 12−14, 15−17, 18−20, 21−24, 25−32 and 33−164 words; systems: bestfirst, ours]
Figure 8: Comparison between transition-based and best-first systems on surface string brevity.
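The brevity score underlying Figure 8 is simply the ratio of output length to reference length, averaged per input-size bin; a minimal sketch (function names are ours, for illustration only):

```python
# Brevity score from Figure 8: number of output words divided by the
# number of reference words, averaged per input-size bin.
# Minimal sketch; function names are ours, not from the paper's code.

def brevity(output_words, reference_words):
    return len(output_words) / len(reference_words)

def brevity_by_bin(pairs, bins):
    # pairs: iterable of (output, reference) word lists;
    # bins: (lo, hi) inclusive ranges over the reference length.
    scores = {b: [] for b in bins}
    for out, ref in pairs:
        for lo, hi in bins:
            if lo <= len(ref) <= hi:
                scores[(lo, hi)].append(brevity(out, ref))
                break
    return {b: sum(v) / len(v) for b, v in scores.items() if v}
```

A system that always emits every input word, as the transition-based linearizer does, scores a constant 1.0 in every bin; a best-first search cut off by a time limit can fall below 1.0 on larger bags.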
len | Precision (Z13 / ours) | Recall (Z13 / ours) | F (Z13 / ours)
< 5 | 24.63 / 20.45 | 14.56 / 21.82 | 18.30 / 21.11
< 10 | 15.20 / 16.33 | 10.59 / 15.88 | 12.48 / 16.10
< 15 | 10.82 / 14.73 | 9.38 / 14.08 | 10.05 / 14.40
< 30 | 8.18 / 12.54 | 8.26 / 12.43 | 8.22 / 12.49
", "type_str": "table", "html": null, "text": "Partial-tree linearization results on the development test set. BL: the BLEU score; SP: number of milliseconds to order one sentence. Z13 refers to the best-first system of Zhang (2013)." }, "TABREF9": { "num": null, "content": "", "type_str": "table", "html": null, "text": "Precision, recall and F-score comparison on different span lengths." }, "TABREF11": { "num": null, "content": "
", "type_str": "table", "html": null, "text": "Final results." }, "TABREF12": { "num": null, "content": "
", "type_str": "table", "html": null, "text": "Example outputs." } } } }