{ "paper_id": "N06-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:45:25.637203Z" }, "title": "Partial Training for a Lexicalized-Grammar Parser", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "", "affiliation": { "laboratory": "", "institution": "Oxford University Computing Laboratory Wolfson Building", "location": { "addrLine": "Parks Road Oxford", "postCode": "OX1 3QD", "country": "UK" } }, "email": "stephen.clark@comlab.ox.ac.uk" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sydney", "location": { "postCode": "2006", "region": "NSW", "country": "Australia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a solution to the annotation bottleneck for statistical parsing, by exploiting the lexicalized nature of Combinatory Categorial Grammar (CCG). The parsing model uses predicate-argument dependencies for training, which are derived from sequences of CCG lexical categories rather than full derivations. A simple method is used for extracting dependencies from lexical category sequences, resulting in high precision, yet incomplete and noisy data. The dependency parsing model of Clark and Curran (2004b) is extended to exploit this partial training data. Remarkably, the accuracy of the parser trained on data derived from category sequences alone is only 1.3% worse in terms of F-score than the parser trained on complete dependency structures.", "pdf_parse": { "paper_id": "N06-1019", "_pdf_hash": "", "abstract": [ { "text": "We propose a solution to the annotation bottleneck for statistical parsing, by exploiting the lexicalized nature of Combinatory Categorial Grammar (CCG). The parsing model uses predicate-argument dependencies for training, which are derived from sequences of CCG lexical categories rather than full derivations. 
A simple method is used for extracting dependencies from lexical category sequences, resulting in high precision, yet incomplete and noisy data. The dependency parsing model of Clark and Curran (2004b) is extended to exploit this partial training data. Remarkably, the accuracy of the parser trained on data derived from category sequences alone is only 1.3% worse in terms of F-score than the parser trained on complete dependency structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "State-of-the-art statistical parsers require large amounts of hand-annotated training data, and are typically based on the Penn Treebank, the largest treebank available for English. Even robust parsers using linguistically sophisticated formalisms, such as TAG (Chiang, 2000) , CCG (Clark and Curran, 2004b; Hockenmaier, 2003) , HPSG (Miyao et al., 2004) and LFG (Riezler et al., 2002; Cahill et al., 2004) , often use training data derived from the Penn Treebank. The labour-intensive nature of the treebank development process, which can take many years, creates a significant barrier for the development of parsers for new domains and languages.", "cite_spans": [ { "start": 261, "end": 275, "text": "(Chiang, 2000)", "ref_id": "BIBREF2" }, { "start": 282, "end": 307, "text": "(Clark and Curran, 2004b;", "ref_id": "BIBREF5" }, { "start": 308, "end": 326, "text": "Hockenmaier, 2003)", "ref_id": "BIBREF9" }, { "start": 334, "end": 354, "text": "(Miyao et al., 2004)", "ref_id": "BIBREF14" }, { "start": 363, "end": 385, "text": "(Riezler et al., 2002;", "ref_id": "BIBREF17" }, { "start": 386, "end": 406, "text": "Cahill et al., 2004)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work has attempted parser adaptation without relying on treebank data from the new domain (Steedman et al., 2003; Lease and Charniak, 2005) . 
In this paper we propose the use of annotated data in the new domain, but only partially annotated data, which reduces the annotation effort required (Hwa, 1999) . We develop a parsing model which can be trained using partial data, by exploiting the properties of lexicalized grammar formalisms. The formalism we use is Combinatory Categorial Grammar (Steedman, 2000) , together with a parsing model described in Clark and Curran (2004b) which we adapt for use with partial data.", "cite_spans": [ { "start": 99, "end": 122, "text": "(Steedman et al., 2003;", "ref_id": "BIBREF18" }, { "start": 123, "end": 148, "text": "Lease and Charniak, 2005)", "ref_id": "BIBREF11" }, { "start": 301, "end": 312, "text": "(Hwa, 1999)", "ref_id": "BIBREF10" }, { "start": 502, "end": 518, "text": "(Steedman, 2000)", "ref_id": "BIBREF19" }, { "start": 564, "end": 588, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Parsing with Combinatory Categorial Grammar (CCG) takes place in two stages: first, CCG lexical categories are assigned to the words in the sentence, and then the categories are combined by the parser (Clark and Curran, 2004a) . The lexical categories can be thought of as detailed part of speech tags and typically express subcategorization information. We exploit the fact that CCG lexical categories contain a lot of syntactic information, and can therefore be used for training a full parser, even though attachment information is not explicitly represented in a category sequence. 
Our partial training regime only requires sentences to be annotated with lexical categories, rather than full parse trees; therefore the data can be produced much more quickly for a new domain or language (Clark et al., 2004) .", "cite_spans": [ { "start": 201, "end": 226, "text": "(Clark and Curran, 2004a)", "ref_id": "BIBREF4" }, { "start": 791, "end": 811, "text": "(Clark et al., 2004)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The partial training method uses the log-linear dependency model described in Clark and Curran (2004b) , which uses sets of predicate-argument dependencies, rather than derivations, for training. Our novel idea is that, since there is so much information in the lexical category sequence, most of the correct dependencies can be easily inferred from the categories alone. More specifically, for a given sentence and lexical category sequence, we train on those predicate-argument dependencies which occur in k% of the derivations licenced by the lexical categories. By setting the k parameter high, we can produce a set of high precision dependencies for training. A similar idea is proposed by Carroll and Briscoe (2002) for producing high precision data for lexical acquisition.", "cite_spans": [ { "start": 78, "end": 102, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" }, { "start": 696, "end": 722, "text": "Carroll and Briscoe (2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Using this procedure we are able to produce dependency data with over 99% precision and, remarkably, up to 86% recall, when compared against the complete gold-standard dependency data. The high recall figure results from the significant amount of syntactic information in the lexical categories, which reduces the ambiguity in the possible dependency structures. 
Since the recall is not 100%, we require a log-linear training method which works with partial data. Riezler et al. (2002) describe a partial training method for a log-linear LFG parsing model in which the \"correct\" LFG derivations for a sentence are those consistent with the less detailed gold standard derivation from the Penn Treebank. We use a similar method here by treating a CCG derivation as correct if it is consistent with the high-precision partial dependency structure. Section 3 explains what we mean by consistency in this context. Surprisingly, the accuracy of the parser trained on partial data approaches that of the parser trained on full data: our best partial-data model is only 1.3% worse in terms of dependency F-score than the full-data model, despite the fact that the partial data does not contain any explicit attachment information.", "cite_spans": [ { "start": 464, "end": 485, "text": "Riezler et al. (2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 The CCG Parsing Model\nClark and Curran (2004b) describes two log-linear parsing models for CCG: a normal-form derivation model and a dependency model. In this paper we use the dependency model, which requires sets of predicate-argument dependencies for training. 
1 Hockenmaier and Steedman (2002) describe a generative model of normal-form derivations; one possibility for training this model on partial data, which has not been explored, is to use the EM algorithm (Pereira and Schabes, 1992) .", "cite_spans": [ { "start": 24, "end": 48, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" }, { "start": 269, "end": 300, "text": "Hockenmaier and Steedman (2002)", "ref_id": "BIBREF8" }, { "start": 470, "end": 497, "text": "(Pereira and Schabes, 1992)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The predicate-argument dependencies are represented as 5-tuples: \u27e8h_f, f, s, h_a, l\u27e9, where h_f is the lexical item of the lexical category expressing the dependency relation; f is the lexical category; s is the argument slot; h_a is the head word of the argument; and l encodes whether the dependency is non-local. For example, the dependency encoding company as the object of bought (as in IBM bought the company) is represented as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u27e8bought_2, (S\\NP_1)/NP_2, 2, company_4, \u2212\u27e9 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "CCG dependency structures are sets of predicate-argument dependencies. We define the probability of a dependency structure as the sum of the probabilities of all those derivations leading to that structure (Clark and Curran, 2004b) . \"Spurious ambiguity\" in CCG means that there can be more than one derivation leading to any one dependency structure. 
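To make the 5-tuple representation concrete, it can be sketched as a small record type; the class and field names below are our own illustration, not the parser's actual implementation, and we fold the word indices into the word strings:

```python
from typing import NamedTuple

class Dependency(NamedTuple):
    head: str        # h_f: lexical item of the category expressing the relation
    category: str    # f: the lexical category
    slot: int        # s: which argument slot is filled
    argument: str    # h_a: head word of the argument
    non_local: str   # l: marker for long-range dependencies ("-" if local)

# Example (1): company as the object of bought in "IBM bought the company".
dep = Dependency("bought_2", r"(S\NP_1)/NP_2", 2, "company_4", "-")
```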
Thus, the probability of a dependency structure, \u03c0, given a sentence, S, is defined as follows:", "cite_spans": [ { "start": 205, "end": 230, "text": "(Clark and Curran, 2004b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(\u03c0|S) = \\sum_{d \u2208 \u2206(\u03c0)} P(d, \u03c0|S)", "eq_num": "(2)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "where \u2206(\u03c0) is the set of derivations which lead to \u03c0. The probability of a \u27e8d, \u03c0\u27e9 pair, \u03c9, conditional on a sentence S, is defined using a log-linear form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "P(\u03c9|S) = \\frac{1}{Z_S} e^{\u03bb.f(\u03c9)} (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u03bb.f(\u03c9) = \\sum_i \u03bb_i f_i(\u03c9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The function f_i is the integer-valued frequency function of the ith feature; \u03bb_i is the weight of the ith feature; and Z_S is a normalising constant. Clark and Curran (2004b) describes the training procedure for the dependency model, which uses a discriminative estimation method by maximising the conditional likelihood of the model given the data (Riezler et al., 2002) . 
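A minimal sketch of equations (2) and (3), with each derivation reduced to a toy dictionary of feature counts; the function names and data are ours, not the C&C implementation, which computes these sums over packed charts:

```python
import math

def score(feats, weights):
    # lambda . f(omega): dot product of weights and integer feature counts
    return sum(weights.get(f, 0.0) * c for f, c in feats.items())

def p_structure(derivs_for_pi, all_pairs, weights):
    # eq. (2)/(3): P(pi|S) = sum over d in Delta(pi) of exp(lambda.f(d, pi)) / Z_S
    z_s = sum(math.exp(score(f, weights)) for f in all_pairs)
    return sum(math.exp(score(f, weights)) for f in derivs_for_pi) / z_s

# Four toy derivations for one sentence, the first two leading to the same
# dependency structure; with all-zero weights the model is uniform.
all_pairs = [{"a": 1}, {"a": 1, "b": 1}, {"b": 2}, {}]
p = p_structure(all_pairs[:2], all_pairs, {})
```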
The optimisation of the objective function is performed using the limited-memory BFGS numerical optimisation algorithm (Nocedal and Wright, 1999; Malouf, 2002) , which requires calculation of the objective function and the gradient of the objective function at each iteration.", "cite_spans": [ { "start": 151, "end": 175, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" }, { "start": 350, "end": 372, "text": "(Riezler et al., 2002)", "ref_id": "BIBREF17" }, { "start": 494, "end": 520, "text": "(Nocedal and Wright, 1999;", "ref_id": "BIBREF15" }, { "start": 521, "end": 534, "text": "Malouf, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The objective function is defined below, where L(\u039b) is the likelihood and G(\u039b) is a Gaussian prior term for smoothing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Example sentence with CCG lexical categories. He anticipates growth for the auto maker: NP (S[dcl]\\NP)/NP NP (NP\\NP)/NP NP[nb]/N N/N N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "L'(\u039b) = L(\u039b) \u2212 G(\u039b) = \\sum_{j=1}^{m} \\log \\sum_{d \u2208 \u2206(\u03c0_j)} e^{\u03bb.f(d,\u03c0_j)} \u2212 \\sum_{j=1}^{m} \\log \\sum_{\u03c9 \u2208 \u03c1(S_j)} e^{\u03bb.f(\u03c9)} \u2212 \\sum_{i=1}^{n} \\frac{\u03bb_i^2}{2\u03c3^2} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "S_1, . . . , S_m are the sentences in the training data; \u03c0_1, . . . , \u03c0_m are the corresponding gold-standard dependency structures; \u03c1(S) is the set of possible derivation, dependency-structure pairs for S; \u03c3 is a smoothing parameter; and n is the number of features. 
The components of the gradient vector are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2202L'(\u039b)/\u2202\u03bb_i = \\sum_{j=1}^{m} \\frac{\\sum_{d \u2208 \u2206(\u03c0_j)} e^{\u03bb.f(d,\u03c0_j)} f_i(d,\u03c0_j)}{\\sum_{d \u2208 \u2206(\u03c0_j)} e^{\u03bb.f(d,\u03c0_j)}} \u2212 \\sum_{j=1}^{m} \\frac{\\sum_{\u03c9 \u2208 \u03c1(S_j)} e^{\u03bb.f(\u03c9)} f_i(\u03c9)}{\\sum_{\u03c9 \u2208 \u03c1(S_j)} e^{\u03bb.f(\u03c9)}} \u2212 \\frac{\u03bb_i}{\u03c3^2} (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The first two terms of the gradient are expectations of feature f_i: the first expectation is over all derivations leading to each gold-standard dependency structure, and the second is over all derivations for each sentence in the training data. The estimation process attempts to make the expectations in (5) equal (ignoring the Gaussian prior term). Another way to think of the estimation process is that it attempts to put as much mass as possible on the derivations leading to the gold-standard structures (Riezler et al., 2002) . Calculation of the feature expectations requires summing over all derivations for a sentence, and summing over all derivations leading to a gold-standard dependency structure. 
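The two expectations and the prior term in (5) can be sketched as follows; in practice these sums are computed over packed charts with the inside-outside algorithm rather than over explicit lists of derivations, and the helper names and toy features below are invented:

```python
import math

def expectation(derivs, weights, feat):
    # model expectation of one feature over a set of derivations:
    # sum_d exp(lambda.f(d)) f_i(d) / sum_d exp(lambda.f(d))
    ws = [math.exp(sum(weights.get(f, 0.0) * c for f, c in d.items()))
          for d in derivs]
    z = sum(ws)
    return sum(w * d.get(feat, 0) for w, d in zip(ws, derivs)) / z

def gradient_i(gold_derivs, all_derivs, weights, feat, sigma_sq):
    # one component of (5): gold expectation minus full expectation,
    # minus the Gaussian prior term lambda_i / sigma^2
    return (expectation(gold_derivs, weights, feat)
            - expectation(all_derivs, weights, feat)
            - weights.get(feat, 0.0) / sigma_sq)

all_d = [{"f1": 1}, {"f2": 1}]   # all derivations for the sentence
gold = [{"f1": 1}]               # derivations leading to the gold structure
g = gradient_i(gold, all_d, {}, "f1", 1.0)
```

With zero weights the gold expectation of f1 is 1 and the full expectation is 0.5, so the gradient pushes the weight of f1 up, moving mass onto the gold derivation.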
Clark and Curran (2003) shows how the sum over the complete derivation space can be performed efficiently using a packed chart and the inside-outside algorithm, and Clark and Curran (2004b) extends this method to sum over all derivations leading to a gold-standard dependency structure.", "cite_spans": [ { "start": 511, "end": 533, "text": "(Riezler et al., 2002)", "ref_id": "BIBREF17" }, { "start": 711, "end": 734, "text": "Clark and Curran (2003)", "ref_id": "BIBREF3" }, { "start": 876, "end": 900, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The partial data we use for training the dependency model is derived from CCG lexical category sequences only. Figure 1 gives an example sentence adapted from CCGbank (Hockenmaier, 2003) together with its lexical category sequence. Note that, although the attachment of the prepositional phrase to the noun phrase is not explicitly represented, it can be inferred in this example because the lexical category assigned to the preposition has to combine with a noun phrase to the left, and in this example there is only one possibility. One of the key insights in this paper is that the significant amount of syntactic information in CCG lexical categories allows us to infer attachment information in many cases.", "cite_spans": [ { "start": 167, "end": 186, "text": "(Hockenmaier, 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 111, "end": 119, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "The procedure we use for extracting dependencies from a sequence of lexical categories is to return all those dependencies which occur in k% of the derivations licenced by the categories. 
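Assuming each derivation is available as a set of dependencies, the extraction procedure can be sketched as a simple threshold on derivation counts (the dependency labels below are illustrative only):

```python
from collections import Counter

def extract_partial(derivations, k):
    # keep each dependency occurring in at least a fraction k of the
    # derivations licenced by the lexical category sequence; k = 1.0
    # keeps only dependencies common to every derivation
    counts = Counter()
    for deps in derivations:
        counts.update(set(deps))
    n = len(derivations)
    return {dep for dep, c in counts.items() if c / n >= k}

# three toy derivations over the same category sequence
derivs = [{"obj(bought,company)", "mod(auto,maker)"},
          {"obj(bought,company)"},
          {"obj(bought,company)", "mod(growth,maker)"}]
certain = extract_partial(derivs, 1.0)
```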
By giving the k parameter a high value, we can extract sets of dependencies with very high precision; in fact, assuming that the correct lexical category sequence licences the correct derivation, setting k to 100 must result in 100% precision, since any dependency which occurs in every derivation must occur in the correct derivation. Of course the recall is not guaranteed to be high; decreasing k has the effect of increasing recall, but at the cost of decreasing precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "The training method described in Section 2 can be adapted to use the (potentially incomplete) sets of dependencies returned by our extraction procedure. In Section 2 a derivation was considered correct if it produced the complete set of gold-standard dependencies. In our partial-data version a derivation is considered correct if it produces dependencies which are consistent with the dependencies returned by our extraction procedure. We define consistency as follows: a set of dependencies D is consistent with a set G if G is a subset of D. We also say that a derivation d is consistent with dependency set G if G is a subset of the dependencies produced by d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "This definition of \"correct derivation\" will introduce some noise into the training data. Noise arises from sentences where the recall of the extracted dependencies is less than 100%, since some of the derivations which are consistent with the extracted dependencies for such sentences will be incorrect. Noise also arises from sentences where the precision of the extracted dependencies is less than 100%, since for these sentences every derivation which is consistent with the extracted dependencies will be incorrect. 
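The consistency test itself is just a subset check; a minimal sketch, with invented dependency names:

```python
def consistent(deriv_deps, partial):
    # a derivation is treated as correct iff the extracted partial
    # structure G is a subset of the dependencies it produces
    return set(partial) <= set(deriv_deps)

derivations = [{"d1", "d2"}, {"d1", "d3"}, {"d2", "d3"}]
partial = {"d1"}
correct = [d for d in derivations if consistent(d, partial)]
```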
The hope is that, if an incorrect derivation produces mostly correct dependencies, then it can still be useful for training. Section 4 shows how the precision and recall of the extracted dependencies varies with k and how this affects parsing accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "The definitions of the objective function (4) and the gradient (5) for training remain the same in the partial-data case; the only differences are that \u2206(\u03c0) is now defined to be those derivations which are consistent with the partial dependency structure \u03c0, and the gold-standard dependency structures \u03c0_j are the partial structures extracted from the gold-standard lexical category sequences. 2 Clark and Curran (2004b) gives an algorithm for finding all derivations in a packed chart which produce a particular set of dependencies. This algorithm is required for calculating the value of the objective function (4) and the first feature expectation in (5). We adapt this algorithm for finding all derivations which are consistent with a partial dependency structure. The new algorithm is shown in Figure 2 .", "cite_spans": [ { "start": 394, "end": 395, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 796, "end": 804, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "The algorithm relies on the definition of a packed chart, which is an instance of a feature forest (Miyao and Tsujii, 2002) . The idea behind a packed chart is that equivalent chart entries of the same type and in the same cell are grouped together, and back pointers to the daughters indicate how an individual entry was created. 
Equivalent entries form the same structures in any subsequent parsing.", "cite_spans": [ { "start": 99, "end": 123, "text": "(Miyao and Tsujii, 2002)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "A feature forest is defined in terms of disjunctive and conjunctive nodes. For a packed chart, the individual entries in a cell are conjunctive nodes, and the equivalence classes of entries are disjunctive nodes. The definition of a feature forest is as follows: A feature forest \u03a6 is a tuple \u27e8C, D, R, \u03b3, \u03b4\u27e9 where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "2 Note that the procedure does return all the gold-standard dependencies for some sentences.\n\u27e8C, D, R, \u03b3, \u03b4\u27e9 is a packed chart / feature forest\nG is a set of dependencies returned by the extraction procedure\nLet c be a conjunctive node\nLet d be a disjunctive node\ndeps(c) is the set of dependencies on node c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "cdeps(c) = |deps(c) \u2229 G|\ndmax(c) = \\sum_{d \u2208 \u03b4(c)} dmax(d) + cdeps(c)\ndmax(d) = max{dmax(c) | c \u2208 \u03b3(d)}\n\nmark(d):\n  mark d as a correct node\n  foreach c \u2208 \u03b3(d)\n    if dmax(c) == dmax(d)\n      mark c as a correct node\n      foreach d' \u2208 \u03b4(c)\n        mark(d')\n\nforeach dr \u2208 R such that dmax(dr) = |G|\n  mark(dr)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "Figure 2: Finding nodes in derivations consistent with a partial dependency structure", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "\u2022 C is a set of conjunctive nodes;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "\u2022 D is a set of disjunctive nodes;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "\u2022 R \u2286 D is a set of root disjunctive nodes;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "\u2022 \u03b3 : D \u2192 2^C is a conjunctive daughter function;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "\u2022 \u03b4 : C \u2192 2^D is a disjunctive daughter function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "Dependencies are associated with conjunctive nodes in the feature forest. For example, if the disjunctive nodes (equivalence classes of individual entries) representing the categories NP and S \\NP combine to produce a conjunctive node S , the resulting S node will have a verb-subject dependency associated with it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "In Figure 2 , cdeps(c) is the number of dependencies on conjunctive node c which appear in partial structure G; dmax(c) is the maximum number of dependencies in G produced by any sub-derivation headed by c; dmax(d) is the same value for disjunctive node d. 
Recursive definitions for calculating these values are given; the base case occurs when conjunctive nodes have no disjunctive daughters.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "The algorithm identifies all those root nodes heading derivations which are consistent with the partial dependency structure G, and traverses the chart topdown marking the nodes in those derivations. The insight behind the algorithm is that, for two conjunctive nodes in the same equivalence class, if one node heads a sub-derivation producing more dependencies in G than the other node, then the node with less dependencies in G cannot be part of a derivation consistent with G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "The conjunctive and disjunctive nodes appearing in derivations consistent with G form a new \"goldstandard\" feature forest. The gold-standard forest, and the complete forest containing all derivations spanning the sentence, can be used to estimate the likelihood value and feature expectations required by the estimation algorithm. 
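A sketch of the Figure 2 algorithm on a toy feature forest; the node classes below are our own minimal stand-ins for packed-chart entries, and a real implementation would memoise the dmax values rather than recompute them:

```python
class Conj:
    def __init__(self, deps=(), daughters=()):
        self.deps = set(deps)       # dependencies introduced at this node
        self.daughters = daughters  # disjunctive daughters (delta)
        self.marked = False

class Disj:
    def __init__(self, options):
        self.options = options      # equivalent conjunctive nodes (gamma)
        self.marked = False

def dmax_c(c, G):
    # max number of dependencies in G produced by a sub-derivation headed by c
    return sum(dmax_d(d, G) for d in c.daughters) + len(c.deps & G)

def dmax_d(d, G):
    return max(dmax_c(c, G) for c in d.options)

def mark(d, G):
    # top-down: keep only options achieving the maximal dependency count
    d.marked = True
    best = dmax_d(d, G)
    for c in d.options:
        if dmax_c(c, G) == best:
            c.marked = True
            for d2 in c.daughters:
                mark(d2, G)

def mark_consistent(roots, G):
    # only roots whose best sub-derivation covers all of G head
    # derivations consistent with the partial structure
    for dr in roots:
        if dmax_d(dr, G) == len(G):
            mark(dr, G)

# toy forest: two equivalent leaf entries, one producing dependency "x"
c_good, c_bad = Conj(deps={"x"}), Conj()
d1 = Disj([c_good, c_bad])
root_c = Conj(deps={"y"}, daughters=(d1,))
root = Disj([root_c])
mark_consistent([root], {"x", "y"})
```

After marking, only the sub-derivation through `c_good` survives: `c_bad` produces fewer dependencies in G than its equivalent entry, so it cannot appear in a consistent derivation.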
Let E^{\u03a6}_{\u039b} f_i be the expected value of f_i over the forest \u03a6 for model \u039b; then the values in (5) can be obtained by calculating E^{\u03a6_j}_{\u039b} f_i for the complete forest \u03a6_j for each sentence S_j in the training data (the second sum in (5)), and also E^{\u03a8_j}_{\u039b} f_i for each forest \u03a8_j of derivations consistent with the partial gold-standard dependency structure for sentence S_j (the first sum in (5)):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2202L(\u039b)/\u2202\u03bb_i = \\sum_{j=1}^{m} (E^{\u03a8_j}_{\u039b} f_i \u2212 E^{\u03a6_j}_{\u039b} f_i)", "eq_num": "(6)" } ], "section": "Partial Training", "sec_num": "3" }, { "text": "The likelihood in (4) can be calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\u039b) = \\sum_{j=1}^{m} (\\log Z_{\u03a8_j} \u2212 \\log Z_{\u03a6_j})", "eq_num": "(7)" } ], "section": "Partial Training", "sec_num": "3" }, { "text": "where log Z_\u03a6 is the normalisation constant for \u03a6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Training", "sec_num": "3" }, { "text": "The resource used for the experiments is CCGbank (Hockenmaier, 2003) , which consists of normal-form CCG derivations derived from the phrase-structure trees in the Penn Treebank. 
It also contains predicate-argument dependencies which we use for development and final evaluation.", "cite_spans": [ { "start": 49, "end": 68, "text": "(Hockenmaier, 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Sections 2-21 of CCGbank were used to investigate the accuracy of the partial dependency structures returned by the extraction procedure. Full, correct dependency structures for the sentences in 2-21 were created by running our CCG parser (Clark and Curran, 2004b ) over the gold-standard derivation for each sentence, outputting the dependencies. This resulted in full dependency structures for 37,283 of the sentences in sections 2-21. Table 1 gives precision and recall values for the dependencies obtained from the extraction procedure, for the 37,283 sentences for which we have full dependency structures.\nThe derivations licenced by a lexical category sequence were created using the CCG parser described in Clark and Curran (2004b) . The parser uses a small number of combinatory rules to combine the categories, along with the CKY chart-parsing algorithm described in Steedman (2000) . It also uses some unary type-changing rules and punctuation rules obtained from the derivations in CCGbank. 3 The parser builds a packed representation, and counting the number of derivations in which a dependency occurs can be performed using a dynamic programming algorithm similar to the inside-outside algorithm. 
Table 1 shows that, by varying the value of k, it is possible to get the recall of the extracted dependencies as high as 85.9%, while still maintaining a precision value of over 99%.", "cite_spans": [ { "start": 239, "end": 263, "text": "(Clark and Curran, 2004b", "ref_id": "BIBREF5" }, { "start": 687, "end": 711, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" }, { "start": 849, "end": 864, "text": "Steedman (2000)", "ref_id": "BIBREF19" }, { "start": 975, "end": 976, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 438, "end": 445, "text": "Table 1", "ref_id": null }, { "start": 1184, "end": 1191, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Accuracy of Dependency Extraction", "sec_num": "4.1" }, { "text": "The training data for the dependency model was created by first supertagging the sentences in sections 2-21, using the supertagger described in Clark and Curran (2004b) . 4 The average number of categories assigned to each word is determined by a parameter, \u03b2, in the supertagger. A category is assigned to a word if the category's probability is within \u03b2 of the highest probability category for that word.", "cite_spans": [ { "start": 144, "end": 168, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" }, { "start": 171, "end": 172, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Accuracy of the Parser", "sec_num": "4.2" }, { "text": "For these experiments, we used a \u03b2 value of 0.01, which assigns roughly 1.6 categories to each word, on average; we also ensured that the correct lexical category was in the set assigned to each word. (We did not do this when parsing the test data.) For some sentences, the packed charts can become very large. The supertagging approach we adopt for training differs from that used for testing: if the size of the chart exceeds some threshold, the value of \u03b2 is increased, reducing ambiguity, and the sentence is supertagged and parsed again. 
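The multi-tagging rule, where "within beta" is a multiplicative cut-off on the probability of the best category, can be sketched as follows; the category labels and probabilities are invented for illustration:

```python
def assign_categories(cat_probs, beta):
    # keep every lexical category whose probability is within a
    # factor beta of the most probable category for the word
    best = max(cat_probs.values())
    return {cat for cat, p in cat_probs.items() if p >= beta * best}

probs = {"N": 0.60, "NP": 0.30, "(S\\NP)/NP": 0.0005}
cats = assign_categories(probs, 0.01)   # beta = 0.01, as in the experiments
```

Lowering beta admits more categories per word, trading supertagging recall against chart size.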
The threshold which limits the size of the charts was set at 300 000 individual entries. Two further values of \u03b2 were used: 0.05 and 0.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accuracy of the Parser", "sec_num": "4.2" }, { "text": "Packed charts were created for each sentence and stored in memory. It is essential that the packed charts for each sentence contain at least one derivation leading to the gold-standard dependency structure. Not all rule instantiations in CCGbank can be produced by our parser; hence it is not possible to produce the gold standard for every sentence in Sections 2-21. For the full-data model we used 34 336 sentences (86.7% of the total). For the partial-data models we were able to use slightly more, since the partial structures are easier to produce. Here we used 35,709 sentences (k = 0.85).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accuracy of the Parser", "sec_num": "4.2" }, { "text": "Since some of the packed charts are very large, we used an 18-node Beowulf cluster, together with a parallel version of the BFGS training algorithm. The training time and number of iterations to convergence were 172 minutes and 997 iterations for the full-data model, and 151 minutes and 861 iterations for the partial-data model (k = 0.85). Approximate memory usage in each case was 17.6 GB of RAM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accuracy of the Parser", "sec_num": "4.2" }, { "text": "The dependency model uses the same set of features described in Clark and Curran (2004b) : dependency features representing predicate-argument dependencies (with and without distance measures); rule instantiation features encoding the combining categories together with the result category (with and without a lexical head); lexical category features, consisting of word-category pairs at the leaf nodes; and root category features, consisting of headword-category pairs at the root nodes. 
Further generalised features for each feature type are formed by replacing words with their POS tags. Only features which occur more than once in the training data are included, except that the cutoff for the rule features is 10 or more and the counting is performed across all derivations licensed by the gold-standard lexical category sequences. The larger cutoff was used since the productivity of the grammar can lead to large numbers of these features. The dependency model has 548,590 features. In order to provide a fair comparison, the same feature set was used for the partial-data and full-data models.", "cite_spans": [ { "start": 64, "end": 88, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Accuracy of the Parser", "sec_num": "4.2" }, { "text": "CCG parsing consists of two phases: first the supertagger assigns the most probable categories to each word, and then the small number of combinatory rules, plus the type-changing and punctuation rules, are used with the CKY algorithm to build a packed chart. 5 We use the method described in Clark and Curran (2004b) for integrating the supertagger with the parser: initially a small number of categories is assigned to each word, and more categories are requested if the parser cannot find a spanning analysis. The \"maximum-recall\" algorithm described in Clark and Curran (2004b) is used to find the highest scoring dependency structure. Table 2 gives the accuracy of the parser on Section 00 of CCGbank, evaluated against the predicate-argument dependencies in CCGbank. 6 The table gives labelled precision, labelled recall and F-score, and lexical category accuracy.
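The F-score in these tables is the harmonic mean of labelled precision and recall. As a quick sanity check (a sketch, not the published evaluation code), the k = 0.85 row of Table 2 can be reproduced from its LP and LR columns:

```python
# Labelled F-score as the harmonic mean of labelled precision (LP)
# and labelled recall (LR), both given as percentages.

def f_score(lp, lr):
    return 2 * lp * lr / (lp + lr)

# k = 0.85 row of Table 2: LP = 85.89, LR = 84.50.
print(round(f_score(85.89, 84.50), 2))  # 85.19
```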
Numbers are given for the partial-data model with various values of k, and for the full-data model, which provides an upper bound for the partial-data model. 5 Gold-standard POS tags from CCGbank were used for all the experiments in this paper.", "cite_spans": [ { "start": 264, "end": 265, "text": "5", "ref_id": null }, { "start": 297, "end": 321, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" }, { "start": 561, "end": 585, "text": "Clark and Curran (2004b)", "ref_id": "BIBREF5" }, { "start": 776, "end": 777, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 644, "end": 651, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Accuracy of the Parser", "sec_num": "4.2" }, { "text": "There are some dependency types produced by our parser which are not in CCGbank; these were ignored for evaluation. Table 4: Accuracy of the Partial Dependency Data using Inside-Outside Scores. We also give a lower bound, which we obtain by randomly traversing a packed chart top-down, giving equal probability to each conjunctive node in an equivalence class. The precision and recall figures are over those sentences for which the parser returned an analysis (99.27% of Section 00). The best result is obtained for a k value of 0.85, which produces partial dependency data with a precision of 99.7 and a recall of 81.3. Interestingly, the results show that decreasing k further, which results in partial data with a higher recall and only a slight loss in precision, harms the accuracy of the parser. The Random result also dispels any suspicion that the partial-data model is performing well simply because of the supertagger; clearly there is still much work to be done after the supertagging phase. Table 3 gives the accuracy of the parser on Section 23, using the best performing partial-data model on Section 00. The precision and recall figures are over those sentences for which the parser returned an analysis (99.63% of Section 23).
The results show that the partial-data model is only 1.3% F-score short of the upper bound.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 123, "text": "Table 4", "ref_id": null }, { "start": 1035, "end": 1042, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "In a final experiment, we attempted to exploit the high accuracy of the partial-data model by using it to provide new training data. For each sentence in Sections 2-21, we parsed the gold-standard lexical category sequences and used the best performing partial-data model to assign scores to each dependency in the packed chart. The score for a dependency was the sum of the probabilities of all derivations producing that dependency, which can be calculated using the inside-outside algorithm. (This is the score used by the maximum-recall parsing algorithm.) Partial dependency structures were then created by returning all dependencies whose score was above some threshold k, as before. Table 4 gives the accuracy of the data created by this procedure. Note how these values differ from those reported in Table 1.", "cite_spans": [], "ref_spans": [ { "start": 689, "end": 696, "text": "Table 4", "ref_id": null }, { "start": 805, "end": 812, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Further Experiments with Inside-Outside", "sec_num": "4.3" }, { "text": "We then trained the dependency model on this partial data using the same method as before. However, the performance of the parser on Section 00 using these new models was below that of the previous best performing partial-data model for all values of k.
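The scoring-and-thresholding step just described can be sketched in miniature. This is a toy sketch with invented derivations and dependencies: the derivations are enumerated explicitly here, whereas the parser computes the same sums over a packed chart with the inside-outside algorithm.

```python
# Toy sketch of the dependency scoring above: the score of a dependency
# is the summed probability of the derivations producing it, and a
# partial structure keeps only dependencies whose score exceeds k.

def dependency_scores(derivations):
    """derivations: (probability, set of dependencies) pairs whose
    probabilities sum to 1 over the licensed derivations."""
    scores = {}
    for prob, deps in derivations:
        for dep in deps:
            scores[dep] = scores.get(dep, 0.0) + prob
    return scores

def partial_structure(derivations, k):
    return {d for d, s in dependency_scores(derivations).items() if s > k}

# Three invented derivations for one sentence; dependencies are
# (head, argument, label) triples.
derivs = [(0.6, {("likes", "John", "subj"), ("likes", "cake", "obj")}),
          (0.3, {("likes", "John", "subj"), ("likes", "cake", "subj")}),
          (0.1, {("likes", "John", "obj")})]

# Only ("likes", "John", "subj") scores 0.6 + 0.3 = 0.9 > 0.85.
print(partial_structure(derivs, 0.85))
```

Raising k trades recall for precision, exactly the trade-off reported for the extracted dependency data.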
We report this negative result because we had hypothesised that using a probability model to score the dependencies, rather than simply the number of derivations in which they occur, would lead to improved performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further Experiments with Inside-Outside", "sec_num": "4.3" }, { "text": "Our main result is that it is possible to train a CCG dependency model from lexical category sequences alone and still obtain parsing results which are only 1.3% worse in terms of labelled F-score than a model trained on complete data. This is a noteworthy result and demonstrates the significant amount of information encoded in CCG lexical categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "The engineering implication is that, since the dependency model can be trained without annotating recursive structures, and only needs sequence information at the word level, it can be ported rapidly to a new domain (or language) by annotating new sequence data in that domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "One possible response to this argument is that, since the lexical category sequence contains so much syntactic information, the task of annotating category sequences must be almost as labour-intensive as annotating full derivations. To test this hypothesis fully would require suitable annotation tools and subjects skilled in CCG annotation, which we do not currently have access to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "However, there is some evidence that annotating category sequences can be done very efficiently. Clark et al. (2004) describe a porting experiment in which a CCG parser is adapted for the question domain.
The supertagger component of the parser is trained on questions annotated at the lexical category level only. The training data consists of over 1,000 annotated questions which took less than a week to create. This suggests, as a very rough approximation, that 4 annotators could annotate 40,000 sentences with lexical categories (the size of the Penn Treebank) in a few months.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Another advantage of annotating with lexical categories is that a CCG supertagger can be used to perform most of the annotation, with the human annotator only required to correct the mistakes made by the supertagger. An accurate supertagger can be bootstrapped quickly, leaving only a small number of corrections for the annotator. A similar procedure is suggested by Doran et al. (1997) for porting an LTAG grammar to a new domain.", "cite_spans": [ { "start": 367, "end": 386, "text": "Doran et al. (1997)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We have proposed a novel solution to the annotation bottleneck for statistical parsing which exploits the lexicalized nature of CCG, and may therefore be applicable to other lexicalized grammar formalisms such as LTAG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Since our training method is intended to be applicable in the absence of derivation data, the use of such rules may appear suspect.
However, we argue that the type-changing and punctuation rules could be manually created for a new domain by examining the lexical category data. 4 An improved version of the supertagger was used for this paper, in which the forward-backward algorithm is used to calculate the lexical category probability distributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Long-distance dependency resolution in automatically acquired wide-coverage PCFG-based LFG approximations", "authors": [ { "first": "A", "middle": [], "last": "Cahill", "suffix": "" }, { "first": "M", "middle": [], "last": "Burke", "suffix": "" }, { "first": "R", "middle": [], "last": "O'Donovan", "suffix": "" }, { "first": "J", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "A", "middle": [], "last": "Way", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the ACL", "volume": "", "issue": "", "pages": "320--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Cahill, M. Burke, R. O'Donovan, J. van Genabith, and A. Way. 2004. Long-distance dependency resolution in automatically acquired wide-coverage PCFG-based LFG approximations. In Proceedings of the 42nd Meeting of the ACL, pages 320-327, Barcelona, Spain.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "High precision extraction of grammatical relations", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "134--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll and Ted Briscoe. 2002. High precision extraction of grammatical relations.
In Proceedings of the 19th International Conference on Computational Linguistics, pages 134-140, Taipei, Taiwan.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical parsing with an automatically-extracted Tree Adjoining Grammar", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Meeting of the ACL", "volume": "", "issue": "", "pages": "456--463", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2000. Statistical parsing with an automatically-extracted Tree Adjoining Grammar. In Proceedings of the 38th Meeting of the ACL, pages 456-463, Hong Kong.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Log-linear models for wide-coverage CCG parsing", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the EMNLP Conference", "volume": "", "issue": "", "pages": "97--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and James R. Curran. 2003. Log-linear models for wide-coverage CCG parsing. In Proceedings of the EMNLP Conference, pages 97-104, Sapporo, Japan.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The importance of supertagging for wide-coverage CCG parsing", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING-04", "volume": "", "issue": "", "pages": "282--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and James R. Curran. 2004a. The importance of supertagging for wide-coverage CCG parsing.
In Proceedings of COLING-04, pages 282-288, Geneva, Switzerland.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parsing the WSJ using CCG and log-linear models", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the ACL", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and James R. Curran. 2004b. Parsing the WSJ using CCG and log-linear models. In Proceedings of the 42nd Meeting of the ACL, pages 104-111, Barcelona, Spain.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Object-extraction and question-parsing using CCG", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the EMNLP Conference", "volume": "", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark, Mark Steedman, and James R. Curran. 2004. Object-extraction and question-parsing using CCG.
In Proceedings of the EMNLP Conference, pages 111-118, Barcelona, Spain.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Maintaining the forest and burning out the underbrush in XTAG", "authors": [ { "first": "C", "middle": [], "last": "Doran", "suffix": "" }, { "first": "B", "middle": [], "last": "Hockey", "suffix": "" }, { "first": "P", "middle": [], "last": "Hopely", "suffix": "" }, { "first": "J", "middle": [], "last": "Rosenzweig", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "B", "middle": [], "last": "Srinivas", "suffix": "" }, { "first": "F", "middle": [], "last": "Xia", "suffix": "" }, { "first": "A", "middle": [], "last": "Nasr", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ENVGRAM Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Doran, B. Hockey, P. Hopely, J. Rosenzweig, A. Sarkar, B. Srinivas, F. Xia, A. Nasr, and O. Rambow. 1997. Maintaining the forest and burning out the underbrush in XTAG. In Proceedings of the ENVGRAM Workshop, Madrid, Spain.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Generative models for statistical parsing with Combinatory Categorial Grammar", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Meeting of the ACL", "volume": "", "issue": "", "pages": "335--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hockenmaier and Mark Steedman. 2002. Generative models for statistical parsing with Combinatory Categorial Grammar.
In Proceedings of the 40th Meeting of the ACL, pages 335-342, Philadelphia, PA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Data and Models for Statistical Parsing with Combinatory Categorial Grammar", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hockenmaier. 2003. Data and Models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Supervised grammar induction using training data with limited constituent information", "authors": [ { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Meeting of the ACL", "volume": "", "issue": "", "pages": "73--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent information. In Proceedings of the 37th Meeting of the ACL, pages 73-79, University of Maryland, MD.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Parsing biomedical literature", "authors": [ { "first": "Matthew", "middle": [], "last": "Lease", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Second International Joint Conference on Natural Language Processing (IJCNLP-05)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Lease and Eugene Charniak. 2005. Parsing biomedical literature.
In Proceedings of the Second International Joint Conference on Natural Language Processing (IJCNLP-05), Jeju Island, Korea.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A comparison of algorithms for maximum entropy parameter estimation", "authors": [ { "first": "Robert", "middle": [], "last": "Malouf", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Sixth Workshop on Natural Language Learning", "volume": "", "issue": "", "pages": "49--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the Sixth Workshop on Natural Language Learning, pages 49-55, Taipei, Taiwan.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Maximum entropy estimation for feature forests", "authors": [ { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Human Language Technology Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Miyao and Jun'ichi Tsujii. 2002. Maximum entropy estimation for feature forests.
In Proceedings of the Human Language Technology Conference, San Diego, CA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the Penn Treebank", "authors": [ { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Ninomiya", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the First International Joint Conference on Natural Language Processing (IJCNLP-04)", "volume": "", "issue": "", "pages": "684--693", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Miyao, Takashi Ninomiya, and Jun'ichi Tsujii. 2004. Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the Penn Treebank. In Proceedings of the First International Joint Conference on Natural Language Processing (IJCNLP-04), pages 684-693, Hainan Island, China.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Numerical Optimization", "authors": [ { "first": "Jorge", "middle": [], "last": "Nocedal", "suffix": "" }, { "first": "Stephen", "middle": [ "J" ], "last": "Wright", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorge Nocedal and Stephen J. Wright. 1999. Numerical Optimization. Springer, New York, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Inside-outside reestimation from partially bracketed corpora", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 30th Meeting of the ACL", "volume": "", "issue": "", "pages": "128--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Pereira and Yves Schabes. 1992.
Inside-outside reestimation from partially bracketed corpora. In Proceedings of the 30th Meeting of the ACL, pages 128-135, Newark, DE.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Tracy", "middle": [ "H" ], "last": "King", "suffix": "" }, { "first": "Ronald", "middle": [ "M" ], "last": "Kaplan", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Crouch", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Maxwell", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Meeting of the ACL", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques.
In Proceedings of the 40th Meeting of the ACL, pages 271-278, Philadelphia, PA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Bootstrapping statistical parsers from small datasets", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Ruhlen", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Jeremiah", "middle": [], "last": "Crim", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 11th Conference of the European Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman, Miles Osborne, Anoop Sarkar, Stephen Clark, Rebecca Hwa, Julia Hockenmaier, Paul Ruhlen, Steve Baker, and Jeremiah Crim. 2003. Bootstrapping statistical parsers from small datasets. In Proceedings of the 11th Conference of the European Association for Computational Linguistics, Budapest, Hungary.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The Syntactic Process", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman. 2000. The Syntactic Process. The MIT Press, Cambridge, MA.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "num": null, "type_str": "table", "content": "
k LP LR F CatAcc
0.99999 85.80 84.51 85.15 93.77
0.9 85.86 84.51 85.18 93.78
0.85 85.89 84.50 85.19 93.71
0.8 85.89 84.45 85.17 93.70
0.7 85.52 84.07 84.79 93.72
0.6 84.99 83.70 84.34 93.65
FullData 87.16 85.84 86.50 93.79
Random 74.63 72.53 73.57 89.31
", "text": "Accuracy of the Parser on Section 00" }, "TABREF2": { "html": null, "num": null, "type_str": "table", "content": "
LP LR F CatAcc
k = 0.85 86.21 85.01 85.60 93.90
FullData 87.50 86.37 86.93 94.01
", "text": "Accuracy of the Parser on Section 23" }, "TABREF3": { "html": null, "num": null, "type_str": "table", "content": "
k Precision Recall SentAcc
0.99999 99.71 80.16 17.48
0.9999 99.68 82.09 19.13
0.999 99.49 85.18 22.18
0.99 99.00 88.95 27.69
0.95 98.34 91.69 34.95
0.9 97.82 92.84 39.18
", "text": "Accuracy of the Partial Dependency Data using Inside-Outside Scores" } } } }