{
"paper_id": "I17-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:41.827229Z"
},
"title": "Neural Probabilistic Model for Non-projective MST Parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "xuezhem@cs.cmu.edu"
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "hovy@cmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a probabilistic parsing model that defines a proper conditional probability distribution over nonprojective dependency trees for a given sentence, using neural representations as inputs. The neural network architecture is based on bi-directional LSTM-CNNs, which automatically benefits from both word-and character-level representations, by using a combination of bidirectional LSTMs and CNNs. On top of the neural network, we introduce a probabilistic structured layer, defining a conditional log-linear model over nonprojective trees. By exploiting Kirchhoff's Matrix-Tree Theorem (Tutte, 1984), the partition functions and marginals can be computed efficiently, leading to a straightforward end-to-end model training procedure via back-propagation. We evaluate our model on 17 different datasets, across 14 different languages. Our parser achieves state-of-the-art parsing performance on nine datasets.",
"pdf_parse": {
"paper_id": "I17-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a probabilistic parsing model that defines a proper conditional probability distribution over nonprojective dependency trees for a given sentence, using neural representations as inputs. The neural network architecture is based on bi-directional LSTM-CNNs, which automatically benefits from both word-and character-level representations, by using a combination of bidirectional LSTMs and CNNs. On top of the neural network, we introduce a probabilistic structured layer, defining a conditional log-linear model over nonprojective trees. By exploiting Kirchhoff's Matrix-Tree Theorem (Tutte, 1984), the partition functions and marginals can be computed efficiently, leading to a straightforward end-to-end model training procedure via back-propagation. We evaluate our model on 17 different datasets, across 14 different languages. Our parser achieves state-of-the-art parsing performance on nine datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsing is one of the first stages in deep language understanding and has gained interest in the natural language processing (NLP) community, due to its usefulness in a wide range of applications. Many NLP systems, such as machine translation (Xie et al., 2011) , entity coreference resolution (Ng, 2010; Durrett and Klein, 2013; , low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) , and word sense disambiguation (Fauceglia et al., 2015) , are becoming more sophisticated, in part because of utilizing syntactic knowledge such as dependency parsing trees.",
"cite_spans": [
{
"start": 254,
"end": 272,
"text": "(Xie et al., 2011)",
"ref_id": "BIBREF61"
},
{
"start": 305,
"end": 315,
"text": "(Ng, 2010;",
"ref_id": "BIBREF50"
},
{
"start": 316,
"end": 340,
"text": "Durrett and Klein, 2013;",
"ref_id": "BIBREF17"
},
{
"start": 377,
"end": 400,
"text": "(McDonald et al., 2013;",
"ref_id": "BIBREF47"
},
{
"start": 401,
"end": 418,
"text": "Ma and Xia, 2014)",
"ref_id": "BIBREF40"
},
{
"start": 451,
"end": 475,
"text": "(Fauceglia et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dependency trees represent syntactic relationships through labeled directed edges between heads and their dependents (modifiers). In the past few years, several dependency parsing algorithms (Nivre and Scholz, 2004; McDonald et al., 2005b; Ma and Zhao, 2012a,b) have been proposed, whose high performance heavily rely on hand-crafted features and task-specific resources that are costly to develop, making dependency parsing models difficult to adapt to new languages or new domains.",
"cite_spans": [
{
"start": 191,
"end": 215,
"text": "(Nivre and Scholz, 2004;",
"ref_id": "BIBREF51"
},
{
"start": 216,
"end": 239,
"text": "McDonald et al., 2005b;",
"ref_id": "BIBREF48"
},
{
"start": 240,
"end": 261,
"text": "Ma and Zhao, 2012a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, non-linear neural networks, such as recurrent neural networks (RNNs) with long-short term memory (LSTM) and convolution neural networks (CNNs), with as input distributed word representations, also known as word embeddings, have been broadly applied, with great success, to NLP problems like part-of-speech (POS) tagging (Collobert et al., 2011) and named entity recognition (NER) (Chiu and Nichols, 2016) . By utilizing distributed representations as inputs, these systems are capable of learning hidden information representations directly from data instead of manually designing hand-crafted features, yielding end-to-end models . Previous studies explored the applicability of neural representations to traditional graph-based parsing models. Some work (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016) replaced the linear scoring function of each arc in traditional models with neural networks and used a margin-based objective (McDonald et al., 2005a) for model training. Other work (Zhang et al., 2016; Dozat and Manning, 2016) formalized dependency parsing as independently selecting the head of each word with cross-entropy objective, without the guarantee of a general non-projective tree structure output. Moreover, there have yet been no previous work on deriving a neural prob-abilistic parsing model to define a proper conditional distribution over non-projective trees for a given sentence.",
"cite_spans": [
{
"start": 330,
"end": 354,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF13"
},
{
"start": 390,
"end": 414,
"text": "(Chiu and Nichols, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 766,
"end": 798,
"text": "(Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF26"
},
{
"start": 799,
"end": 820,
"text": "Wang and Chang, 2016)",
"ref_id": "BIBREF59"
},
{
"start": 947,
"end": 971,
"text": "(McDonald et al., 2005a)",
"ref_id": "BIBREF46"
},
{
"start": 1003,
"end": 1023,
"text": "(Zhang et al., 2016;",
"ref_id": "BIBREF65"
},
{
"start": 1024,
"end": 1048,
"text": "Dozat and Manning, 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a probabilistic neural network-based model for non-projective dependency parsing. This parsing model uses bi-directional LSTM-CNNs (BLSTM-CNNs) as backbone to learn neural information representations, on top of which a probabilistic structured layer is constructed with a conditional log-linear model, defining a conditional distribution over all non-projective dependency trees. The architecture of BLSTM-CNNs is similar to the one used for sequence labeling tasks , where CNNs encode character-level information of a word into its character-level representation and BLSTM models context information of each word. Due to the probabilistic structured output layer, we can use negative log-likelihood as the training objective, where the partition function and marginals can be computed via Kirchhoff's Matrix-Tree Theorem (Tutte, 1984) to process the optimization efficiently by back-propagation. At test time, parsing trees can be decoded with the maximum spanning tree (MST) algorithm (Mc-Donald et al., 2005b) . We evaluate our model on 17 treebanks across 14 different languages, achieving state-of-the-art performance on 9 treebanks. The contributions of this work are summarized as: (i) proposing a neural probabilistic model for non-projective dependency parsing. (ii) giving empirical evaluations of this model on benchmark data sets over 14 languages. (iii) achieving stateof-the-art performance with this parser on nine different treebanks.",
"cite_spans": [
{
"start": 816,
"end": 861,
"text": "Kirchhoff's Matrix-Tree Theorem (Tutte, 1984)",
"ref_id": null
},
{
"start": 1013,
"end": 1038,
"text": "(Mc-Donald et al., 2005b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we describe the components (layers) of our neural parsing model. We introduce the neural layers in our neural network one-by-one from top to bottom.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Probabilistic Parsing Model",
"sec_num": "2"
},
{
"text": "In this paper, we will use the following notation: x = {x 1 , . . . , x n } represents a generic input sentence, where x i is the ith word. y represents a generic (possibly non-projective) dependency tree, which represents syntactic relationships through labeled directed edges between heads and their dependents. For example, Figure 1 shows a dependency tree for the sentence, \"Economic news had little effect on financial markets\", with the sentences root-symbol as its root. T (x) is used to denote the set of possible dependency trees for sentence x.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 335,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "The probabilistic model for dependency parsing defines a family of conditional probability p(y|x; \u0398) over all y given sentence x, with a loglinear form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "P (y|x; \u0398) = exp (x h ,xm)\u2208y \u03c6(x h , x m ; \u0398) Z(x; \u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "where \u0398 is the parameter of this model, s hm = \u03c6(x h , x m ; \u0398) is the score function of edge from x h to x m , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "Z(x; \u0398) = y\u2208T (x) exp \uf8eb \uf8ed (x h ,xm)\u2208y s hm \uf8f6 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "is the partition function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "Bi-Linear Score Function. In our model, we adopt a bi-linear form score function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "\u03c6(x h , x m ; \u0398) = \u03d5(x h ) T W\u03d5(x m ) +U T \u03d5(x h ) + V T \u03d5(x m ) + b where \u0398 = {W, U, V, b}, \u03d5(x i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "is the representation vector of x i , W, U, V denote the weight matrix of the bi-linear term and the two weight vectors of the linear terms in \u03c6, and b denotes the bias vector. As discussed in Dozat and Manning (2016) , the bi-linear form of score function is related to the bilinear attention mechanism (Luong et al., 2015) . The bi-linear score function differs from the traditional score function proposed in Kiperwasser and Goldberg (2016) by adding the bi-linear term. A similar score function is proposed in Dozat and Manning (2016) . The difference between their and our score function is that they only used the linear term for head words (U T \u03d5(x h )) while use them for both heads and modifiers.",
"cite_spans": [
{
"start": 193,
"end": 217,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF16"
},
{
"start": 304,
"end": 324,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF33"
},
{
"start": 412,
"end": 443,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF26"
},
{
"start": 514,
"end": 538,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
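{
"text": "To make the computation concrete, the following minimal NumPy sketch (our own illustration with assumed names, not the authors' code) evaluates the bi-linear score for a single head-modifier pair; here b is a scalar bias in the unlabeled case:\nimport numpy as np\n\ndef bilinear_score(phi_h, phi_m, W, U, V, b):\n    # phi(x_h)^T W phi(x_m) + U^T phi(x_h) + V^T phi(x_m) + b\n    return phi_h @ W @ phi_m + U @ phi_h + V @ phi_m + b\nIn practice, the scores of all O(n^2) head-modifier pairs of a sentence are computed in one batched tensor operation over the matrix of word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},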
{
"text": "Matrix-Tree Theorem. In order to train the probabilistic parsing model, as discussed in Koo et al. (2007) , we have to compute the partition function and the marginals, requiring summation over the set T (x):",
"cite_spans": [
{
"start": 88,
"end": 105,
"text": "Koo et al. (2007)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "Z(x; \u0398) = y\u2208T (x) (x h ,xm)\u2208y \u03c8(x h , x m ; \u0398) \u00b5 h,m (x; \u0398) = y\u2208T (x):(x h ,xm)\u2208y P (y|x; \u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "where \u03c8(x h , x m ; \u0398) is the potential function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "\u03c8(x h , x m ; \u0398) = exp (\u03c6(x h , x m ; \u0398))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "and \u00b5 h,m (x; \u0398) is the marginal for edge from hth word to mth word for x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "Previous studies (Koo et al., 2007; Smith and Smith, 2007) have presented how a variant of Kirchhoff's Matrix-Tree Theorem (Tutte, 1984) can be used to evaluate the partition function and marginals efficiently. In this section, we briefly revisit this method.",
"cite_spans": [
{
"start": 17,
"end": 35,
"text": "(Koo et al., 2007;",
"ref_id": "BIBREF28"
},
{
"start": 36,
"end": 58,
"text": "Smith and Smith, 2007)",
"ref_id": "BIBREF55"
},
{
"start": 91,
"end": 136,
"text": "Kirchhoff's Matrix-Tree Theorem (Tutte, 1984)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "For a sentence x with n words, we denote x = {x 0 , x 1 , . . . , x n }, where x 0 is the root-symbol. We define a complete graph G on n + 1 nodes (including the root-symbol x 0 ), where each node corresponds to a word in x and each edge corresponds to a dependency arc between two words. Then, we assign non-negative weights to the edges of this complete graph with n + 1 nodes, yielding the weighted adjacency matrix A(\u0398) \u2208 R n+1\u00d7n+1 , for h, m = 0, . . . , n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "A h,m (\u0398) = \u03c8(x h , x m ; \u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "Based on the adjacency matrix A(\u0398), we have the Laplacian matrix:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "L(\u0398) = D(\u0398) \u2212 A(\u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "where D(\u0398) is the weighted degree matrix:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "D h,m (\u0398) = \uf8f1 \uf8f2 \uf8f3 n h =0 A h ,m (\u0398) if h = m 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "Then, according to Theorem 1 in Koo et al. (2007) , the partition function is equal to the minor of L(\u0398) w.r.t row 0 and column 0:",
"cite_spans": [
{
"start": 32,
"end": 49,
"text": "Koo et al. (2007)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "Z(x; \u0398) = L (0,0) (\u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "where for a matrix A, A (h,m) denotes the minor of A w.r.t row h and column m; i.e., the determinant of the submatrix formed by deleting the hth row and mth column. The marginals can be computed by calculating the matrix inversion of the matrix corresponding to L (0,0) (\u0398). The time complexity of computing the partition function and marginals is O(n 3 ).",
"cite_spans": [
{
"start": 24,
"end": 29,
"text": "(h,m)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
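{
"text": "The following NumPy sketch (a minimal illustration under our own naming conventions, not the authors' released code) implements the computation just described: it exponentiates the scores into potentials, forms the Laplacian, and reads off the log-partition function from the minor and the marginals from the inverse of that minor:\nimport numpy as np\n\ndef partition_and_marginals(scores):\n    # scores: (n+1) x (n+1) matrix of edge scores phi(x_h, x_m; Theta),\n    # with row/column 0 corresponding to the root symbol x_0\n    n1 = scores.shape[0]\n    A = np.exp(scores)                 # potentials psi = exp(phi)\n    np.fill_diagonal(A, 0.0)           # no self-loops\n    D = np.diag(A.sum(axis=0))         # weighted degree matrix (column sums)\n    L = D - A                          # Laplacian\n    L00 = L[1:, 1:]                    # minor w.r.t. row 0 and column 0\n    _, log_Z = np.linalg.slogdet(L00)  # log of the partition function\n    Linv = np.linalg.inv(L00)\n    mu = np.zeros_like(A)              # marginals mu[h, m]\n    for h in range(n1):\n        for m in range(1, n1):\n            if h == m:\n                continue\n            if h == 0:\n                mu[h, m] = A[h, m] * Linv[m - 1, m - 1]\n            else:\n                mu[h, m] = A[h, m] * (Linv[m - 1, m - 1] - Linv[m - 1, h - 1])\n    return log_Z, mu\nWith log Z in hand, the negative log-likelihood of a gold tree y is log Z(x) minus the sum of the gold edge scores, and the marginals give its gradient with respect to the scores, which is what makes end-to-end training via back-propagation straightforward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},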
{
"text": "Labeled Parsing Model. Though it is originally designed for unlabeled parsing, our probabilistic parsing model is easily extended to include dependency labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "In labeled dependency trees, each edge is represented by a tuple (x h , x m , l), where x h and x m are the head word and modifier, respectively, and l is the label of dependency type of this edge. Then we can extend the original model for labeled dependency parsing by extending the score function to include dependency labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "\u03c6(x h , x m , l; \u0398) = \u03d5(x h ) T W l \u03d5(x m ) +U T l \u03d5(x h ) + V T l \u03d5(x m ) +b l where W l , U l , V l , b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "l are the weights and bias corresponding to dependency label l. Suppose that there are L different dependency labels, it suffices to define the new adjacency matrix by assigning the weight of a edge with the sum of weights over different dependency labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "A h,m (\u0398) = L l=1 \u03c8(x h , x m , l; \u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "The partition function and marginals over labeled dependency trees are obtained by operating on the new adjacency matrix A (\u0398). The time complexity becomes O(n 3 + Ln 2 ). In practice, L is probably large. For English, the number of edge labels in Stanford Basic Dependencies (De Marneffe et al., 2006) is 45, and the number in the treebank of CoNLL-2008 shared task (Surdeanu et al., 2008) is 70. While, the average length of sentences in English Penn Treebank (Marcus et al., 1993) is around 23. Thus, L is not negligible comparing to n.",
"cite_spans": [
{
"start": 280,
"end": 302,
"text": "Marneffe et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 367,
"end": 390,
"text": "(Surdeanu et al., 2008)",
"ref_id": "BIBREF57"
},
{
"start": 462,
"end": 483,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
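{
"text": "As a small sketch of the labeled extension (again with our own variable names), the labeled potentials collapse into a single (n+1) x (n+1) adjacency matrix by summing over labels, after which the computation proceeds exactly as in the unlabeled case:\nimport numpy as np\n\n# scores_l: (L, n+1, n+1) array of labeled edge scores phi(x_h, x_m, l; Theta)\n# A[h, m] = sum over l of psi(x_h, x_m, l; Theta)\nA = np.exp(scores_l).sum(axis=0)\nThe O(Ln^2) term in the complexity comes from this summation; the remaining O(n^3) comes from the determinant and the matrix inversion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},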
{
"text": "It should be noticed that in our labeled model, for different dependency label l we use the same vector representation \u03d5(x i ) for each word x i . The dependency labels are distinguished (only) by the parameters (weights and bias) corresponding to each of them. One advantage of this is that it significantly reduces the memory requirement comparing to the model in Dozat and Manning (2016) which distinguishes \u03d5 l (x i ) for different label l. Maximum Spanning Tree Decoding. The decoding problem of this parsing model can be formulated as:",
"cite_spans": [
{
"start": 366,
"end": 390,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "y * = argmax y\u2208T (x) P (y|x; \u0398) = argmax y\u2208T (x) (x h ,xm)\u2208y \u03c6(x h , x m ; \u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
{
"text": "which can be solved by using the Maximum Spanning Tree (MST) algorithm described in McDonald et al. (2005b) .",
"cite_spans": [
{
"start": 84,
"end": 107,
"text": "McDonald et al. (2005b)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},
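{
"text": "As a decoding sketch, the Chu-Liu/Edmonds maximum spanning arborescence implementation in networkx can stand in for the MST algorithm of McDonald et al. (2005b); the wrapper below is our own illustration, assuming scores is the (n+1) x (n+1) matrix of edge scores with node 0 as the root:\nimport networkx as nx\n\ndef decode_mst(scores):\n    n1 = scores.shape[0]\n    G = nx.DiGraph()\n    for h in range(n1):\n        for m in range(1, n1):  # the root takes no incoming edges\n            if h != m:\n                G.add_edge(h, m, weight=float(scores[h, m]))\n    tree = nx.maximum_spanning_arborescence(G, attr='weight')\n    head = {m: h for h, m in tree.edges()}\n    return [head[m] for m in range(1, n1)]  # head index of each word\nBecause node 0 has no incoming edges, any spanning arborescence of this graph is necessarily rooted at the root symbol, as required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge-Factored Parsing Layer",
"sec_num": "2.1"
},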
{
"text": "Now, the remaining question is how to obtain the vector representation of each word with a neural network. In the following subsections, we will describe the architecture of our neural network model for representation learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network for Representation Learning",
"sec_num": "2.2"
},
{
"text": "Previous work (Santos and Zadrozny, 2014) have shown that CNNs are an effective approach to extract morphological information (like the prefix or suffix of a word) from characters of words and encode it into neural representations, which has been proven particularly useful on Out-of-Vocabulary words (OOV). The CNN architecture our model uses to extract character-level representation of a given word is the same as the one used in . The CNN architecture is shown in Figure 2 . Following Ma and Hovy (2016), a dropout layer (Srivastava et al., 2014) is applied before character embeddings are input to CNN. (Mikolov et al., 2010) , sequence labeling and machine translation (Cho et al., 2014) , to capture context information in languages. Though, in theory, RNNs are able to learn long-distance dependencies, in practice, they fail due to the gradient vanishing/exploding problems (Bengio et al., 1994; Pascanu et al., 2013) .",
"cite_spans": [
{
"start": 525,
"end": 550,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF56"
},
{
"start": 608,
"end": 630,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF49"
},
{
"start": 675,
"end": 693,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 883,
"end": 904,
"text": "(Bengio et al., 1994;",
"ref_id": "BIBREF5"
},
{
"start": 905,
"end": 926,
"text": "Pascanu et al., 2013)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [
{
"start": 468,
"end": 476,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "CNNs",
"sec_num": "2.2.1"
},
{
"text": "LSTMs (Hochreiter and Schmidhuber, 1997) are variants of RNNs designed to cope with these gradient vanishing problems. Basically, a LSTM unit is composed of three multiplicative gates which control the proportions of information to pass and to forget on to the next time step.",
"cite_spans": [
{
"start": 6,
"end": 40,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CNNs",
"sec_num": "2.2.1"
},
{
"text": "BLSTM. Many linguistic structure prediction tasks can benefit from having access to both past (left) and future (right) contexts, while the LSTM's hidden state h t takes information only from past, knowing nothing about the future. An elegant solution whose effectiveness has been proven by previous work is bi-directional LSTM (BLSTM). The basic idea is to present each sequence forwards and backwards to two separate hidden states to capture past and future information, respectively. Then the two hidden states are concatenated to form the final output. As discussed in Dozat and Manning (2016) , there are more than one advantages to apply a multilayer perceptron (MLP) to the output vectors of BLSTM before the score function, eg. reducing the dimensionality and overfitting of the model. We follow this work by using a one-layer perceptron with elu (Clevert et al., 2015) as activation function.",
"cite_spans": [
{
"start": 573,
"end": 597,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF16"
},
{
"start": 855,
"end": 877,
"text": "(Clevert et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CNNs",
"sec_num": "2.2.1"
},
{
"text": "Finally, we construct our neural network model by feeding the output vectors of BLSTM (after MLP) into the parsing layer. Figure 3 illustrates the architecture of our network in detail.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 130,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "BLSTM-CNNs",
"sec_num": "2.3"
},
{
"text": "For each word, the CNN in Figure 2 , with character embeddings as inputs, encodes the characterlevel representation. Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network. To enrich word-level information, we also use POS embeddings. Finally, the output vec- Figure 3 : The main architecture of our parsing model. The character representation for each word is computed by the CNN in Figure 2 . Then the character representation vector is concatenated with the word and pos embedding before feeding into the BLSTM network. Dashed arrows indicate dropout layers applied on the input, hidden and output vectors of BLSTM. tors of the neural netwok are fed to the parsing layer to jointly parse the best (labeled) dependency tree. As shown in Figure 3 , dropout layers are applied on the input, hidden and output vectors of BLSTM, using the form of recurrent dropout proposed in Gal and Ghahramani (2016) .",
"cite_spans": [
{
"start": 945,
"end": 970,
"text": "Gal and Ghahramani (2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 330,
"end": 338,
"text": "Figure 3",
"ref_id": null
},
{
"start": 454,
"end": 462,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 809,
"end": 817,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "BLSTM-CNNs",
"sec_num": "2.3"
},
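{
"text": "A compact PyTorch sketch of this architecture is given below; the dimensions follow the hyper-parameters reported in Section 3, but the module names and wiring details are our own assumptions rather than the authors' released implementation (dropout layers are omitted for brevity):\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass BLSTMCNNEncoder(nn.Module):\n    def __init__(self, n_words, n_chars, n_pos, word_dim=100, char_dim=30,\n                 n_filters=30, pos_dim=100, lstm_size=256, mlp_dim=100):\n        super().__init__()\n        self.word_emb = nn.Embedding(n_words, word_dim)\n        self.char_emb = nn.Embedding(n_chars, char_dim)\n        self.pos_emb = nn.Embedding(n_pos, pos_dim)\n        # CNN over the characters of each word, followed by max pooling\n        self.char_cnn = nn.Conv1d(char_dim, n_filters, kernel_size=3, padding=1)\n        self.lstm = nn.LSTM(word_dim + n_filters + pos_dim, lstm_size,\n                            num_layers=2, bidirectional=True, batch_first=True)\n        self.mlp = nn.Linear(2 * lstm_size, mlp_dim)  # one-layer MLP with ELU\n\n    def forward(self, words, chars, pos):\n        # chars: (batch, seq_len, max_word_len) character indices\n        B, T, C = chars.shape\n        ce = self.char_emb(chars).view(B * T, C, -1).transpose(1, 2)\n        char_rep = self.char_cnn(ce).max(dim=2).values.view(B, T, -1)\n        x = torch.cat([self.word_emb(words), char_rep, self.pos_emb(pos)], dim=-1)\n        h, _ = self.lstm(x)\n        return F.elu(self.mlp(h))  # per-word representations phi(x_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLSTM-CNNs",
"sec_num": "2.3"
},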
{
"text": "In this section, we provide details about implementing and training the neural parsing model, including parameter initialization, model optimization and hyper parameter selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Training",
"sec_num": "3"
},
{
"text": "Word Embeddings. For all the parsing models on different languages, we initialize word vectors with pretrained word embeddings. For Chi- , where r and c are the number of of rows and columns in the structure (Glorot and Bengio, 2010) . Bias vectors are initialized to zero, except the bias b f for the forget gate in LSTM , which is initialized to 1.0 (Jozefowicz et al., 2015) .",
"cite_spans": [
{
"start": 208,
"end": 233,
"text": "(Glorot and Bengio, 2010)",
"ref_id": "BIBREF21"
},
{
"start": 352,
"end": 377,
"text": "(Jozefowicz et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Initialization",
"sec_num": "3.1"
},
{
"text": "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with \u03b21 = \u03b22 = 0.9. We choose an initial learning rate of \u03b7 0 = 0.002. The learning rate \u03b7 was adapted using a schedule S = [e 1 , e 2 , . . . , e s ], in which the learning rate \u03b7 is annealed by multiplying a fixed decay rate \u03c1 = 0.5 after e i \u2208 S epochs respectively. We used S = [10, 30, 50, 70, 100] and trained all networks for a total of 120 epochs. While the Adam optimizer automatically adjusts the global learning rate according to past gradient magnitudes, we find that this additional decay consistently improves model performance across all settings and languages. To reduce the effects of \"gradient exploding\", we use a gradient clipping of 5.0 (Pascanu et al., 2013) . We explored other optimization algorithms such as stochastic gradient descent (SGD) with momentum, AdaDelta (Zeiler, 2012), or RMSProp (Dauphin et al., 2015) , but none of them meaningfully improve upon Adam with learning rate annealing in our preliminary experiments.",
"cite_spans": [
{
"start": 60,
"end": 81,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 364,
"end": 368,
"text": "[10,",
"ref_id": null
},
{
"start": 369,
"end": 372,
"text": "30,",
"ref_id": null
},
{
"start": 373,
"end": 376,
"text": "50,",
"ref_id": null
},
{
"start": 377,
"end": 380,
"text": "70,",
"ref_id": null
},
{
"start": 381,
"end": 385,
"text": "100]",
"ref_id": null
},
{
"start": 740,
"end": 762,
"text": "(Pascanu et al., 2013)",
"ref_id": "BIBREF52"
},
{
"start": 892,
"end": 922,
"text": "RMSProp (Dauphin et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Algorithm",
"sec_num": "3.2"
},
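{
"text": "The optimization setup described above maps directly onto standard PyTorch utilities; the following sketch of the training loop uses the stated hyper-parameters, with model and train_data as assumed placeholders:\nimport torch\n\n# model and train_data are assumed to be defined elsewhere\noptimizer = torch.optim.Adam(model.parameters(), lr=0.002, betas=(0.9, 0.9))\nscheduler = torch.optim.lr_scheduler.MultiStepLR(\n    optimizer, milestones=[10, 30, 50, 70, 100], gamma=0.5)\n\nfor epoch in range(120):\n    for batch in train_data:\n        optimizer.zero_grad()\n        loss = model(batch)  # negative log-likelihood of the gold trees\n        loss.backward()\n        torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)\n        optimizer.step()\n    scheduler.step()  # halve the learning rate at epochs 10, 30, 50, 70, 100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Algorithm",
"sec_num": "3.2"
},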
{
"text": "Dropout Training. To mitigate overfitting, we apply the dropout method (Srivastava et al., 2014; Ma et al., 2017) to regularize our model. As shown in Figure 2 and 3, we apply dropout on character embeddings before inputting to CNN, and on the input, hidden and output vectors of BLSTM. We apply dropout rate of 0.15 to all the embeddings. For BLSTM, we use the recurrent dropout (Gal and Ghahramani, 2016) with 0.25 dropout rate between hidden states and 0.33 between layers.",
"cite_spans": [
{
"start": 71,
"end": 96,
"text": "(Srivastava et al., 2014;",
"ref_id": "BIBREF56"
},
{
"start": 97,
"end": 113,
"text": "Ma et al., 2017)",
"ref_id": "BIBREF34"
},
{
"start": 380,
"end": 406,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 151,
"end": 159,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Optimization Algorithm",
"sec_num": "3.2"
},
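{
"text": "The recurrent dropout of Gal and Ghahramani (2016) differs from standard dropout in that a single mask is sampled per sequence and reused at every time step; a minimal sketch of such a shared mask (our own helper, not part of any library API) is:\nimport torch\n\ndef recurrent_dropout_mask(batch, dim, p, device):\n    # one Bernoulli keep-mask per sequence, shared across all time steps\n    keep = 1.0 - p\n    mask = torch.bernoulli(torch.full((batch, 1, dim), keep, device=device))\n    return mask / keep  # inverted-dropout scaling keeps expectations unchanged\n\n# usage: x of shape (batch, time, dim) -> x * recurrent_dropout_mask(batch, dim, 0.33, x.device)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Algorithm",
"sec_num": "3.2"
},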
{
"text": "We found that the model using the new recurrent dropout converged much faster than standard dropout, while achiving similar performance. Table 1 summarizes the chosen hyper-parameters for all experiments. We tune the hyper-parameters on the development sets by random search. We use the same hyper-parameters across the models on different treebanks and languages, due to time constrains. Note that we use 2-layer BLSTM followed with 1-layer MLP. We set the state size of LSTM to 256 and the dimension of MLP to 100. Tuning these two parameters did not significantly impact the performance of our model. ",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Optimization Algorithm",
"sec_num": "3.2"
},
{
"text": "We evaluate our neural probabilistic parser on the same data setup as Kuncoro et al. (2016) , namely the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Haji\u010d et al., 2009) . Following previous work, all experiments are evaluated on the metrics of unlabeled attachment score (UAS) and Labeled attachment score (LAS).",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "Kuncoro et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 145,
"end": 166,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF43"
},
{
"start": 213,
"end": 231,
"text": "(Xue et al., 2002)",
"ref_id": "BIBREF62"
},
{
"start": 267,
"end": 287,
"text": "(Haji\u010d et al., 2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "We first construct experiments to dissect the effectiveness of each input information (embeddings) of our neural network architecture by ablation studies. We compare the performance of four versions of our model with different inputs -Basic, +POS, +Char and Full -where the Basic model utilizes only the pretrained word embeddings as inputs, while the +POS and +Char models augments the basic one with POS embedding and character information, respectively. According to the results shown in for Chinese than English and German. Table 3 gives the performance on PTB of the parsers trained with two different objective functions -the cross-entropy objective of each word, and our objective based on likelihood for an entire tree. The parser with global likelihood objective outperforms the one with simple crossentropy objective, demonstrating the effectiveness of the global structured objective. Table 4 illustrates the results of the four versions of our model on the three languages, together with twelve previous top-performance systems for comparison. Our Full model significantly outperforms the graph-based parser proposed in Kiperwasser and Goldberg (2016) which used similar neural network architecture for representation learning (detailed discussion in Section 5). Moreover, our model achieves better results than the parser distillation method (Kuncoro et al., 2016) on all the three languages. The results of our parser are slightly worse than the scores reported in Dozat and Manning (2016) . One possible reason is that, as mentioned in Section 2.1, for labeled dependency parsing Dozat and Manning (2016) used different vectors for different dependency labels to represent each word, making their model require much more memory than ours.",
"cite_spans": [
{
"start": 1355,
"end": 1377,
"text": "(Kuncoro et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 1479,
"end": 1503,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1595,
"end": 1619,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 528,
"end": 535,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 896,
"end": 903,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.2"
},
{
"text": "Datasets. To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task on dependency parsing -the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) . For the treebanks from CoNLL-2006 shared task, following , we randomly select 5% of the training data as the development set. UAS and LAS are evaluated using the official scorer 1 of CoNLL-2006 shared task.",
"cite_spans": [
{
"start": 209,
"end": 232,
"text": "(Surdeanu et al., 2008)",
"ref_id": "BIBREF57"
},
{
"start": 282,
"end": 308,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on CoNLL Treebanks",
"sec_num": "4.4"
},
{
"text": "Baselines. We compare our model with the third-order Turbo parser (Martins et al., 2013) , the low-rank tensor based model (Tensor) , the randomized greedy inference based (RGB) model , the labeled dependency parser with inner-to-outer greedy decoding algorithm (In-Out) , and the bi-direction attention based parser (Bi-Att) . We also compare our parser against the best published results for individual languages. This comparison includes four additional systems: , Martins et al. (2011) , Zhang and McDonald (2014) and Pitler and McDonald (2015) . Results. Table 5 summarizes the results of our model, along with the state-of-the-art baselines. On average across 14 languages, our approach significantly outperforms all the baseline systems. It should be noted that the average UAS of our parser over the 14 languages is better than that of the \"best published\", which are from different systems that achieved best results for different languages. For individual languages, our parser achieves state-of-the-art performance on both UAS and LAS on 8 languages -Bulgarian, Chinese, Czech, Dutch, English, German, Japanese and Spanish. On Arabic, Danish, Portuguese, Slovene and Swedish, our parser obtains the best LAS. Another interesting observation is that the Full model outperforms the +POS model on 13 languages. The only exception is Chinese, which matches the observation in Section 4.2.",
"cite_spans": [
{
"start": 66,
"end": 88,
"text": "(Martins et al., 2013)",
"ref_id": "BIBREF44"
},
{
"start": 468,
"end": 489,
"text": "Martins et al. (2011)",
"ref_id": "BIBREF45"
},
{
"start": 492,
"end": 517,
"text": "Zhang and McDonald (2014)",
"ref_id": "BIBREF64"
},
{
"start": 522,
"end": 548,
"text": "Pitler and McDonald (2015)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [
{
"start": 560,
"end": 567,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on CoNLL Treebanks",
"sec_num": "4.4"
},
{
"text": "In recent years, several different neural network based models have been proposed and successfully applied to dependency parsing. Among these neural models, there are three approaches most similar to our model -the two graphbased parsers with BLSTM feature representation (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016) , and the neural bi-affine attention parser (Dozat and Manning, 2016) . Kiperwasser and Goldberg (2016) proposed a graph-based dependency parser which uses BLSTM for word-level representations. Wang and Chang (2016) used a similar model with a way to learn sentence segment embedding based on an extra forward LSTM network. Both of these two parsers trained the parsing models by optimizing margin-based objectives. There are three main differences between their models and ours. First, they only used linear form score function, instead of using the bi-linear term between the vectors of heads and modifiers. Second, They did not employ CNNs to model character-level information. Third, we proposed a probabilistic model over non-projective trees on the top of neural representations, while they trained their models with a margin-based objective. Dozat and Manning (2016) proposed neural parsing model using bi-affine score function, which is similar to the bi-linear form score function in our model. Our model mainly differ from this model by using CNN to model character-level information. Moreover, their model formalized dependency parsing as independently selecting the head of each word with cross-entropy objective, while our probabilistic parsing model jointly encodes and decodes parsing trees for given sentences.",
"cite_spans": [
{
"start": 272,
"end": 304,
"text": "(Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF26"
},
{
"start": 305,
"end": 326,
"text": "Wang and Chang, 2016)",
"ref_id": "BIBREF59"
},
{
"start": 371,
"end": 396,
"text": "(Dozat and Manning, 2016)",
"ref_id": "BIBREF16"
},
{
"start": 399,
"end": 430,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF26"
},
{
"start": 521,
"end": 542,
"text": "Wang and Chang (2016)",
"ref_id": "BIBREF59"
},
{
"start": 1176,
"end": 1200,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we proposed a neural probabilistic model for non-projective dependency parsing, using the BLSTM-CNNs architecture for representation learning. Experimental results on 17 treebanks across 14 languages show that our parser significantly improves the accuracy of both dependency structures (UAS) and edge labels (LAS), over several previously state-of-the-art systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://ilk.uvt.nl/conll/software.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Polyglot: Distributed word representations for multilingual nlp",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of CoNLL-2013",
"volume": "",
"issue": "",
"pages": "183--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of CoNLL- 2013. Sofia, Bulgaria, pages 183-192.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Globally normalized transition-based neural networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Presta",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL-2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally nor- malized transition-based neural networks. In Pro- ceedings of ACL-2016 (Volume 1: Long Papers).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improved transition-based parsing by modeling characters instead of words with lstms",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP-2015",
"volume": "",
"issue": "",
"pages": "349--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by model- ing characters instead of words with lstms. In Pro- ceedings of EMNLP-2015. Lisbon, Portugal, pages 349-359.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Training with exploration improves a greedy stack lstm parser",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP-2016",
"volume": "",
"issue": "",
"pages": "2005--2010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with exploration im- proves a greedy stack lstm parser. In Proceedings of EMNLP-2016. Austin, Texas, pages 2005-2010.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning long-term dependencies with gradient descent is difficult",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Patrice",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Frasconi",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE Transactions on",
"volume": "5",
"issue": "2",
"pages": "157--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gra- dient descent is difficult. Neural Networks, IEEE Transactions on 5(2):157-166.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-2012",
"volume": "",
"issue": "",
"pages": "1455--1465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet and Joakim Nivre. 2012. A transition- based system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Pro- ceedings of EMNLP-2012. Jeju Island, Korea, pages 1455-1465.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CoNLL-X shared task on multilingual dependency parsing",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceeding of CoNLL-2006",
"volume": "",
"issue": "",
"pages": "149--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceeding of CoNLL-2006. New York, NY, pages 149-164.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP-2014",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of EMNLP-2014. Doha, Qatar, pages 740-750.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bi-directional attention with agreement for dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP-2016",
"volume": "",
"issue": "",
"pages": "2204--2214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. 2016. Bi-directional attention with agreement for dependency parsing. In Proceedings of EMNLP-2016. Austin, Texas, pages 2204-2214.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Named entity recognition with bidirectional lstm-cnns",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "357--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transac- tions of the Association for Computational Linguis- tics 4:357-370.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "On the properties of neural machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.1259"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. arXiv preprint arXiv:1409.1259 .",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Fast and accurate deep network learning by exponential linear units (elus)",
"authors": [
{
"first": "Djork-Arn\u00e9",
"middle": [],
"last": "Clevert",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Unterthiner",
"suffix": ""
},
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.07289"
]
},
"num": null,
"urls": [],
"raw_text": "Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289 .",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research 12:2493-2537.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Rmsprop and equilibrated adaptive learning rates for non-convex optimization",
"authors": [
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Harm",
"middle": [],
"last": "De Vries",
"suffix": ""
},
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1502.04390"
]
},
"num": null,
"urls": [],
"raw_text": "Yann N Dauphin, Harm de Vries, Junyoung Chung, and Yoshua Bengio. 2015. Rmsprop and equili- brated adaptive learning rates for non-convex opti- mization. arXiv preprint arXiv:1502.04390 .",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine De",
"middle": [],
"last": "Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC-2006",
"volume": "",
"issue": "",
"pages": "449--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine De Marneffe, Bill MacCartney, Christopher D. Manning, et al. 2006. Generat- ing typed dependency parses from phrase structure parses. In Proceedings of LREC-2006. pages 449- 454.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01734"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency pars- ing. arXiv preprint arXiv:1611.01734 .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Easy victories and uphill battles in coreference resolution",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP-2013",
"volume": "",
"issue": "",
"pages": "1971--1982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of EMNLP-2013. Seattle, Washington, USA, pages 1971-1982.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Transitionbased dependency parsing with stack long shortterm memory",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-2015",
"volume": "1",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In Proceedings of ACL-2015 (Volume 1: Long Papers). Beijing, China, pages 334-343.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Word sense disambiguation via propstore and ontonotes for event mention detection",
"authors": [
{
"first": "Nicolas",
"middle": [
"R"
],
"last": "Fauceglia",
"suffix": ""
},
{
"first": "Yiu-Chang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation",
"volume": "",
"issue": "",
"pages": "11--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas R Fauceglia, Yiu-Chang Lin, Xuezhe Ma, and Eduard Hovy. 2015. Word sense disambiguation via propstore and ontonotes for event mention detec- tion. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation. Denver, Colorado, pages 11-15.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A theoretically grounded application of dropout in recurrent neural networks",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "International conference on artificial intelligence and statistics",
"volume": "",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics. pages 249-256.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Ant\u00f2nia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jan\u0161t\u011bp\u00e1nek",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL-2009: Shared Task",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, et al. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of CoNLL- 2009: Shared Task. pages 1-18.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An empirical exploration of recurrent network architectures",
"authors": [
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML-15)",
"volume": "",
"issue": "",
"pages": "2342--2350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recur- rent network architectures. In Proceedings of the 32nd International Conference on Machine Learn- ing (ICML-15). pages 2342-2350.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Simple and accurate dependency parsing using bidirectional lstm feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional lstm feature representations. Transactions of the Association for Computational Linguistics 4:313-327.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Efficient thirdorder dependency parsers",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL-2010",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo and Michael Collins. 2010. Efficient third- order dependency parsers. In Proceedings of ACL- 2010. Uppsala, Sweden, pages 1-11.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Structured prediction models via the matrix-tree theorem",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-2007",
"volume": "",
"issue": "",
"pages": "141--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction mod- els via the matrix-tree theorem. In Proceedings of EMNLP-2007. Prague, Czech Republic, pages 141- 150.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dual decomposition for parsing with non-projective head automata",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP-2010",
"volume": "",
"issue": "",
"pages": "1288--1298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of EMNLP-2010. Cam- bridge, MA, pages 1288-1298.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Distilling an ensemble of greedy dependency parsers into one mst parser",
"authors": [
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP-2016",
"volume": "",
"issue": "",
"pages": "1744--1753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Dis- tilling an ensemble of greedy dependency parsers into one mst parser. In Proceedings of EMNLP- 2016. Austin, Texas, pages 1744-1753.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Low-rank tensors for scoring dependency structures",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Xin",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL-2014",
"volume": "1",
"issue": "",
"pages": "1381--1391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scor- ing dependency structures. In Proceedings of ACL- 2014 (Volume 1: Long Papers). Baltimore, Mary- land, pages 1381-1391.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Two/too simple adaptations of word2vec for syntax problems",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL-2015",
"volume": "",
"issue": "",
"pages": "1299--1304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of NAACL-2015. Denver, Colorado, pages 1299-1304.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP-2015",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of EMNLP-2015. Lisbon, Portugal, pages 1412-1421.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Dropout with expectation-linear regularization",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yingkai",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yaoliang",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yaoliang Yu, Yuntian Deng, and Eduard Hovy. 2017. Dropout with expectation-linear regularization. In Proceed- ings of the 5th International Conference on Learn- ing Representations (ICLR-2017). Toulon, France.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Efficient inner-toouter greedy algorithm for higher-order labeled dependency parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP-2015",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2015. Efficient inner-to- outer greedy algorithm for higher-order labeled de- pendency parsing. In Proceedings of EMNLP-2015.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL-2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In Proceedings of ACL-2016 (Volume 1: Long Papers).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Unsupervised ranking model for entity coreference resolution",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhengzhong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-2016",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma, Zhengzhong Liu, and Eduard Hovy. 2016. Unsupervised ranking model for entity coreference resolution. In Proceedings of NAACL-2016. San Diego, California, USA.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL-2014",
"volume": "",
"issue": "",
"pages": "1337--1348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Fei Xia. 2014. Unsupervised depen- dency parsing with transferring distribution via par- allel guidance and entropy regularization. In Pro- ceedings of ACL-2014. Baltimore, Maryland, pages 1337-1348.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Fourth-order dependency parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012: Posters",
"volume": "",
"issue": "",
"pages": "785--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Hai Zhao. 2012a. Fourth-order depen- dency parsing. In Proceedings of COLING 2012: Posters. Mumbai, India, pages 785-796.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Probabilistic models for high-order projective dependency parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1502.04174"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Hai Zhao. 2012b. Probabilistic models for high-order projective dependency parsing. Tech- nical Report, arXiv:1502.04174 .",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Building a large annotated corpus of English: the Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computa- tional Linguistics 19(2):313-330.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Turning on the turbo: Fast third-order nonprojective turbo parsers",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Almeida",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL-2013",
"volume": "2",
"issue": "",
"pages": "617--622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order non- projective turbo parsers. In Proceedings of ACL- 2013 (Volume 2: Short Papers). Sofia, Bulgaria, pages 617-622.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Dual decomposition with many overlapping components",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Figueiredo",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Aguiar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP-2011",
"volume": "",
"issue": "",
"pages": "238--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Martins, Noah Smith, Mario Figueiredo, and Pedro Aguiar. 2011. Dual decomposition with many overlapping components. In Proceedings of EMNLP-2011. Edinburgh, Scotland, UK., pages 238-249.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL-2005",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of de- pendency parsers. In Proceedings of ACL-2005. Ann Arbor, Michigan, USA, pages 91-98.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Universal dependency annotation for multilingual parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Bedini",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Bertomeu Castell\u00f3",
"suffix": ""
},
{
"first": "Jungmee",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL-2013",
"volume": "",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuz- man Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Uni- versal dependency annotation for multilingual pars- ing. In Proceedings of ACL-2013. Sofia, Bulgaria, pages 92-97.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Ribarov",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP-2005",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-projective dependency pars- ing using spanning tree algorithms. In Proceedings of HLT/EMNLP-2005. Vancouver, Canada, pages 523-530.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u010cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Interspeech",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Inter- speech. volume 2, page 3.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Supervised noun phrase coreference research: The first fifteen years",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL-2010. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1396--1411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of ACL-2010. Association for Computational Linguis- tics, Uppsala, Sweden, pages 1396-1411.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Deterministic dependency parsing of English text",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Scholz",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING-2004",
"volume": "",
"issue": "",
"pages": "64--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING-2004. Geneva, Switzerland, pages 64- 70.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "On the difficulty of training recurrent neural networks",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ICML-2013",
"volume": "",
"issue": "",
"pages": "1310--1318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neu- ral networks. In Proceedings of ICML-2013. pages 1310-1318.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "A linear-time transition system for crossing interval trees",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL-2015",
"volume": "",
"issue": "",
"pages": "662--671",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler and Ryan McDonald. 2015. A linear-time transition system for crossing interval trees. In Pro- ceedings of NAACL-2015. Denver, Colorado, pages 662-671.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Learning character-level representations for part-of-speech tagging",
"authors": [
{
"first": "D",
"middle": [],
"last": "Cicero",
"suffix": ""
},
{
"first": "Bianca",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zadrozny",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ICML-2014",
"volume": "",
"issue": "",
"pages": "1818--1826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cicero D Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of ICML-2014. pages 1818-1826.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Probabilistic models of nonprojective dependency trees",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-2007. Prague, Czech Republic",
"volume": "",
"issue": "",
"pages": "132--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Smith and Noah A. Smith. 2007. Proba- bilistic models of nonprojective dependency trees. In Proceedings of EMNLP-2007. Prague, Czech Re- public, pages 132-140.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929-1958.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "The conll-2008 shared task on joint parsing of syntactic and semantic dependencies",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of CoNLL-2008",
"volume": "",
"issue": "",
"pages": "159--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu\u00eds M\u00e0rquez, and Joakim Nivre. 2008. The conll- 2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of CoNLL- 2008. pages 159-177.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Graph theory",
"authors": [
{
"first": "William",
"middle": [],
"last": "Thomas Tutte",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Thomas Tutte. 1984. Graph theory, vol- ume 11. Addison-Wesley Menlo Park.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Graph-based dependency parsing with bidirectional lstm",
"authors": [
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL-2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional lstm. In Pro- ceedings of ACL-2016 (Volume 1: Long Papers).",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "A novel dependency-to-string model for statistical machine translation",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP-2011. Edinburgh",
"volume": "",
"issue": "",
"pages": "216--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Xie, Haitao Mi, and Qun Liu. 2011. A novel dependency-to-string model for statistical machine translation. In Proceedings of EMNLP-2011. Edin- burgh, Scotland, UK., pages 216-226.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Building a large-scale annotated chinese corpus",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Fu-Dong",
"middle": [],
"last": "Chiou",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of COLING-2002",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Fu-Dong Chiou, and Martha Palmer. 2002. Building a large-scale annotated chinese cor- pus. In Proceedings of COLING-2002. pages 1-8.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Adadelta: an adaptive learning rate method",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. Adadelta: an adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701 .",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Enforcing structural diversity in cube-pruned dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL-2014",
"volume": "2",
"issue": "",
"pages": "656--661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Ryan McDonald. 2014. Enforcing structural diversity in cube-pruned dependency pars- ing. In Proceedings of ACL-2014 (Volume 2: Short Papers). Baltimore, Maryland, pages 656-661.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Dependency parsing as head selection",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01280"
]
},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2016. Dependency parsing as head selection. arXiv preprint arXiv:1606.01280 .",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Greed is good if randomized: New inference for dependency parsing",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP-2014",
"volume": "",
"issue": "",
"pages": "1013--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Zhang, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2014. Greed is good if randomized: New inference for dependency parsing. In Proceedings of EMNLP-2014. Doha, Qatar, pages 1013-1024.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "An example labeled dependency tree."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The convolution neural network for extracting character-level representations of words. Dashed arrows indicate a dropout layer applied before character embeddings are input to CNN."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "5: UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers. \"Best Published\" includes the most accurate parsers in term of UAS among Koo et al. (2010), Martins et al. (2011), Martins et al. (2013), Lei et al. (2014), Zhang et al. (2014), Zhang and McDonald (2014), Pitler and McDonald (2015), Ma and Hovy (2015), and Cheng et al. (2016)."
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "2.2.2 Bi-directional LSTM LSTM Unit. Recurrent neural networks (RNNs) are a powerful family of connectionist models that have been widely applied in NLP tasks, such as language modeling"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">English</td><td/><td/><td colspan=\"2\">Chinese</td><td/><td/><td colspan=\"2\">German</td><td/></tr><tr><td/><td>Dev</td><td/><td colspan=\"2\">Test</td><td>Dev</td><td/><td colspan=\"2\">Test</td><td>Dev</td><td/><td colspan=\"2\">Test</td></tr><tr><td>Model</td><td>UAS</td><td>LAS</td><td>UAS</td><td>LAS</td><td>UAS</td><td>LAS</td><td>UAS</td><td>LAS</td><td>UAS</td><td>LAS</td><td>UAS</td><td>LAS</td></tr><tr><td>Basic</td><td>94.51</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"text": "92.23 94.62 92.54 84.33 81.65 84.35 81.63 90.46 87.77 90.69 88.42 +Char 94.74 92.55 94.73 92.75 85.07 82.63 85.24 82.46 92.16 89.82 92.24 90.18 +POS 94.71 92.60 94.83 92.96 88.98 87.55 89.05 87.74 91.94 89.51 92.19 90.05 Full 94.77 92.66 94.88 92.98 88.51 87.16 88.79 87.47 92.37 90.09 92.58 90.54"
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Parsing performance (UAS and LAS) of different versions of our model on both the development and test sets for three languages."
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Parsing performance on PTB with different training objective functions."
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>English</td><td>Chinese</td><td>German</td></tr></table>",
"text": "+Char model obtains better performance than the Basic model on all the three languages, showing that character-level representations are important for dependency parsing. Second, on English and German, +Char and +POS achieves comparable performance, while on Chinese +POS significantly outperforms +Char model. Finally, the Full model achieves the best accuracy on English and German, but on Chinese +POS obtains the best. Thus, we guess that the POS information is more useful"
},
"TABREF8": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "UAS and LAS of four versions of our model on test sets for three languages, together with top-performance parsing systems."
},
"TABREF9": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Turbo Tensor RGB</td><td>In-Out</td><td>Bi-Att</td><td>+POS</td><td>Full</td><td colspan=\"2\">Best Published</td></tr><tr><td/><td>UAS</td><td>UAS</td><td>UAS</td><td>UAS [LAS]</td><td>UAS [LAS]</td><td>UAS [LAS]</td><td>UAS [LAS]</td><td>UAS</td><td>LAS</td></tr><tr><td>ar</td><td>79.64</td><td>79.95</td><td colspan=\"5\">80.24 60]</td><td>94.02</td><td>-</td></tr><tr><td>zh</td><td>89.98</td><td>92.68</td><td colspan=\"2\">93.04 92.58 [88.51]</td><td>-</td><td colspan=\"2\">93.44 [90.04] 93.40 [90.10]</td><td>93.04</td><td>-</td></tr><tr><td>cs</td><td>90.32</td><td>90.50</td><td colspan=\"5\">90.77 82]</td><td>87.39</td><td>-</td></tr><tr><td>en</td><td>93.22</td><td>93.02</td><td colspan=\"2\">93.25 92.45 [89.43]</td><td>-</td><td colspan=\"2\">94.43 [92.31] 94.66 [92.52]</td><td>93.25</td><td>-</td></tr><tr><td>de</td><td>92.41</td><td>91.97</td><td colspan=\"5\">92.67 90.79 [87.74] 92.71 [89.80] 93.53 [91.55] 93.62 [91.90]</td><td>92.71</td><td>89.80</td></tr><tr><td>ja</td><td>93.52</td><td>93.71</td><td colspan=\"5\">93.56 93.54 [91.80] 93.44 [90.67] 93.82 [92.34] 94.02 [92.60]</td><td>93.80</td><td>-</td></tr><tr><td>pt</td><td>92.69</td><td>91.92</td><td colspan=\"5\">92.36 91.54 [87.68] 92.77 [88.44] 92.59 [89.12] 92.71 [88.92]</td><td>93.03</td><td>-</td></tr><tr><td>sl</td><td>86.01</td><td>86.24</td><td colspan=\"5\">86.72 84.39 [73.74] 86.01 [75.90] 85.73 [76.48] 86.73 [77.56]</td><td>87.06</td><td>-</td></tr><tr><td>es</td><td>85.59</td><td>88.00</td><td colspan=\"5\">88.75 86.44 [83.29] 88.74 [84.03] 88.58 [85.03] 89.20 [85.77]</td><td>88.75</td><td>84.03</td></tr><tr><td>sv</td><td>91.14</td><td>91.00</td><td colspan=\"5\">91.08 89.94 [83.09] 90.50 [84.05] 90.89 [86.58] 91.22 [86.92]</td><td>91.85</td><td>85.26</td></tr><tr><td>tr</td><td>76.90</td><td>76.84</td><td colspan=\"5\">76.68 75.32 [60.39] 78.43 [66.16] 75.88 [61.72] 77.71 [65.81]</td><td>78.43</td><td>66.16</td></tr><tr><td>av</td><td>88.73</td><td>89.08</td><td colspan=\"2\">89.44 88.08 [81.84]</td><td>-</td><td colspan=\"2\">89.47 [84.24] 89.95 [84.99]</td><td>89.83</td><td>-</td></tr></table>",
"text": "79.60 [67.09] 80.34 [68.58] 80.05 [67.80] 80.80 [69.40] 81.12 bg 93.10 93.50 93.72 92.68 [87.79] 93.96 [89.55] 93.66 [89.79] 94.28 [90.88.01 [79.31] 91.16 [85.14] 91.04 [85.82] 91.18 [85.92] 91.16 85.14 da 91.48 91.39 91.86 91.44 [85.55] 91.56 [85.53] 91.52 [86.57] 91.86 [87.07] 92.00 nl 86.19 86.41 87.39 84.45 [80.31] 87.15 [82.41] 87.41 [84.17] 87.85 [84."
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": ""
}
}
}
}