{
"paper_id": "I05-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:24:58.295944Z"
},
"title": "High Efficiency Realization for a Wide-Coverage Unification Grammar",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sussex",
"location": {}
},
"email": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We give a detailed account of an algorithm for efficient tactical generation from underspecified logical-form semantics, using a wide-coverage grammar and a corpus of real-world target utterances. Some earlier claims about chart realization are critically reviewed and corrected in the light of a series of practical experiments. As well as a set of algorithmic refinements, we present two novel techniques: the integration of subsumption-based local ambiguity factoring, and a procedure to selectively unpack the generation forest according to a probability distribution given by a conditional, discriminative model.",
"pdf_parse": {
"paper_id": "I05-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "We give a detailed account of an algorithm for efficient tactical generation from underspecified logical-form semantics, using a wide-coverage grammar and a corpus of real-world target utterances. Some earlier claims about chart realization are critically reviewed and corrected in the light of a series of practical experiments. As well as a set of algorithmic refinements, we present two novel techniques: the integration of subsumption-based local ambiguity factoring, and a procedure to selectively unpack the generation forest according to a probability distribution given by a conditional, discriminative model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A number of wide-coverage precise bi-directional NL grammars have been developed over the past few years. One example is the LinGO English Resource Grammar (ERG) [1] , couched in the HPSG framework. Other grammars of similar size and coverage also exist, notable examples using the LFG and the CCG formalisms [2, 3] . These grammars are used for generation from logical form input (also termed tactical generation or realization) in circumscribed domains, as part of applications such as spoken dialog systems [4] and machine translation [5] .",
"cite_spans": [
{
"start": 162,
"end": 165,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 309,
"end": 312,
"text": "[2,",
"ref_id": "BIBREF1"
},
{
"start": 313,
"end": 315,
"text": "3]",
"ref_id": "BIBREF2"
},
{
"start": 510,
"end": 513,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 538,
"end": 541,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Grammars like the ERG are lexicalist, in that the majority of information is encoded in lexical entries (or lexical rules) as opposed to being represented in constructions (i.e. rules operating on phrases). The semantic input to the generator for such grammars, often, is a bag of lexical predicates with semantic relationships captured by appropriate instantiation of variables associated with predicates and their semantic roles. For these sorts of grammars and 'flat' semantic inputs, lexically-driven approaches to realization -such as Shake-and-Bake [6] , bag generation from logical form [7] , chart generation [8] , and constraint-based generation [9] -are highly suitable. Alternative approaches based on semantic head-driven generation and more recent variants [10, 11] would work less well for lexicalist grammars since these approaches assume a hierarchically structured input logical form.",
"cite_spans": [
{
"start": 555,
"end": 558,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 594,
"end": 597,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 617,
"end": 620,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 655,
"end": 658,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 770,
"end": 774,
"text": "[10,",
"ref_id": "BIBREF9"
},
{
"start": 775,
"end": 778,
"text": "11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Similarly to parsing with large scale grammars, realization can be computationally expensive. In his presentation of chart generation, Kay [8] describes one source of potential inefficiency and proposes an approach for tackling it. However, Kay does not report on a verification of his approach with an actual grammar. Carroll et al. [12] Dan Flickinger and Ann Copestake contributed a lot to the work described in this paper. We also thank Berthold Crysmann, Jan Tore L\u00f8nning and Bob Moore for useful discussions. Funding is from the projects COGENT (UK EPSRC) and LOGON (Norwegian Research Council).",
"cite_spans": [
{
"start": 139,
"end": 142,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 334,
"end": 338,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "h1, { h1:proposition m(h2), h3: run v(e4, x5), h3:past(e4), h6: the q(x5, h7, h8), h9: athlete n(x5), h9: young a(x5), h9: polish a(x5) }, { h2 =q h3, h8 =q h9 } Fig. 1 . Simplified MRS for an utterance like the young Polish athlete ran (and variants) . Elements from the bag of EPs are linked through both scopal and 'standard' logical variables. present a practical evaluation of chart generation efficiency with a large-scale HPSG grammar, and describe a different approach to the problem which becomes necessary when using a wide-coverage grammar. White [3] identifies further inefficiencies, and describes and evaluates strategies for addressing them, albeit using what appears to be a somewhat task-specific rather than genuine wide-coverage grammar. In this paper, we revisit this previous work and present new, improved algorithms for efficient chart generation; taken together these result in (i) practical performance that improves over a previous implementation by two orders of magnitude, and (ii) throughput that is near linear in the size of the input semantics.",
"cite_spans": [
{
"start": 558,
"end": 561,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 162,
"end": 168,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2, we give an overview of the grammar and the semantic formalism we use, recap the basic chart generation procedure, and discuss the various sources of potential inefficiency in the basic approach. We then describe the algorithmic improvements we have made to tackle these problems (Section 3), and conclude with the results of evaluating these improvements (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Minimal Recursion Semantics (MRS) [13] is a popular member of a family of flat, underspecified, event-based (neo-Davidsonian) frameworks for computational semantics that have been in wide use since the mid-1990s. MRS allows both underspecification of scope relations and generalization over classes of predicates (e.g. two-place temporal relations corresponding to distinct lexical prepositions: English in May vs. on Monday, say), which renders it an attractive input representation for tactical generation. While an in-depth introduction to MRS is beyond the scope of this paper, Figure 1 shows an example semantics that we will use in the following sections. The truth-conditional core is captured as a flat multi-set (or 'bag') of elementary predications (EPs), combined with generalized quantifiers and designated handle variables to account for scopal relations. The bag of EPs is complemented by the handle of the top-scoping EP (h 1 in our example) and a set of 'handle constraints' recording restrictions on scope relations in terms of dominance relations.",
"cite_spans": [
{
"start": 34,
"end": 38,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 582,
"end": 590,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Minimal Recursion Semantics and the LinGO ERG",
"sec_num": "2.1"
},
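{
"text": "To make the shape of this input concrete, the following is a minimal sketch of the Figure 1 MRS as a Python data structure. The class and field names (EP, MRS, eps, hcons) are our own illustrative choices, not part of the MRS formalism or of any particular implementation.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class EP:
    '''One elementary predication: a labelling handle, a predicate, its arguments.'''
    label: str    # handle variable naming the EP's scope position
    pred: str     # predicate symbol, e.g. '_run_v'
    args: tuple   # logical variables filling the semantic roles

@dataclass
class MRS:
    top: str                                    # handle of the top-scoping EP
    eps: list = field(default_factory=list)     # flat multi-set ('bag') of EPs
    hcons: list = field(default_factory=list)   # (hi, lo) dominance ('qeq') constraints

# The simplified MRS of Figure 1:
fig1 = MRS(
    top='h1',
    eps=[EP('h1', 'proposition_m', ('h2',)),
         EP('h3', '_run_v', ('e4', 'x5')),
         EP('h3', 'past', ('e4',)),
         EP('h6', '_the_q', ('x5', 'h7', 'h8')),
         EP('h9', '_athlete_n', ('x5',)),
         EP('h9', '_young_a', ('x5',)),
         EP('h9', '_polish_a', ('x5',))],
    hcons=[('h2', 'h3'), ('h8', 'h9')])
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimal Recursion Semantics and the LinGO ERG",
"sec_num": null
},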
{
"text": "The LinGO ERG [1] is a general-purpose, open-source HPSG implementation with fairly comprehensive lexical and grammatical coverage over a variety of domains and genres. The grammar has been deployed for diverse NLP tasks, including machine translation of spoken and edited language, email auto response, consumer opinion tracking (from newsgroup data), and some question answering work. 1 The ERG uses MRS as its meaning representation layer, and the grammar distribution includes treebanked versions of several reference corpora -providing disambiguated and hand-inspected 'gold' standard MRS formulae for each input utterance -of which we chose one of the more complex sets for our empirical investigations of realization performance using the ERG (see Section 4 below).",
"cite_spans": [
{
"start": 14,
"end": 17,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 387,
"end": 388,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimal Recursion Semantics and the LinGO ERG",
"sec_num": "2.1"
},
{
"text": "Briefly, the basic chart generation procedure works as follows. A preprocessing phase indexes lexical entries, lexical rules and grammar rules by the semantics they contain. In order to find the lexical entries with which to initialize the chart, the input semantics is checked against the indexed lexicon. When a lexical entry is retrieved, the variable positions in its relations are instantiated in one-to-one correspondence with the variables in the input semantics (a process we term Skolemization, in loose analogy to the more general technique in theorem proving; see Section 3.1 below). For instance, for the MRS in Figure 1 , the lookup process would retrieve one or more instantiated lexical entries for run containing h 3 : run v(e 4 , x 5 ). Lexical and morphological rules are applied to the instantiated lexical entries. If the lexical rules introduce relations, their application is only allowed if these relations correspond to parts of the input semantics (h 3 :past(e 4 ), say, in our example). We treat a number of special cases (lexical items containing more than one relation, grammar rules which introduce relations, and semantically vacuous lexical items) in the same way as Carroll et al. [12] .",
"cite_spans": [
{
"start": 1213,
"end": 1217,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 624,
"end": 632,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Basic Procedure",
"sec_num": "2.2"
},
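{
"text": "As a rough illustration of the lookup phase, the sketch below (reusing the toy MRS encoding from Section 2.1 above) indexes lexical entries by the predicates they introduce and initializes the chart with one instantiated edge per matching EP. All names here are hypothetical, and a real implementation instantiates typed feature structures rather than the flat dictionaries used in this sketch.

from collections import defaultdict

def index_lexicon(lexicon):
    '''Map each semantic predicate to the lexical entries introducing it.'''
    index = defaultdict(list)
    for entry in lexicon:
        for pred in entry['preds']:
            index[pred].append(entry)
    return index

def initialize_chart(mrs, lex_index):
    '''Retrieve instantiated (Skolemized) lexical edges for the input EPs.'''
    chart = []
    for i, ep in enumerate(mrs.eps):
        for entry in lex_index.get(ep.pred, []):
            chart.append({
                'entry': entry['id'],
                'coverage': 1 << i,                      # bit vector over input EPs
                'variables': set(ep.args) | {ep.label},  # ground Skolem constants
            })
    return chart
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Procedure",
"sec_num": null
},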
{
"text": "After initializing the chart (with inactive edges), active edges are created from inactive ones by instantiating the head daughter of a rule; the resulting edges are then combined with other inactive edges. Chart generation is very similar to chart parsing, but what an edge covers is defined in terms of semantics, rather than orthography. Each edge is associated with the set of relations it covers. Before combining two edges a check is made to ensure that edges do not overlap: i.e. that they do not cover the same relation(s). The goal is to find all possible inactive edges covering the full input MRS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Procedure",
"sec_num": "2.2"
},
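{
"text": "The control structure of the basic procedure can be pictured as below, with edges carrying a coverage bit vector so that the overlap test is a single bitwise AND. The combine() function, which stands in for rule application by feature structure unification, and the flat edge representation are simplifying assumptions; the distinction between active and inactive edges is also glossed over in this sketch.

def generate(initial_edges, rules, n_eps, combine):
    '''Exhaustively combine non-overlapping edges; return the edges
    that cover the full input semantics.'''
    agenda = list(initial_edges)
    chart = list(initial_edges)
    while agenda:
        edge = agenda.pop()
        for other in list(chart):
            if edge['coverage'] & other['coverage']:
                continue                    # overlapping EPs: not combinable
            for rule in rules:
                new = combine(rule, edge, other)    # None if unification fails
                if new is not None:
                    new['coverage'] = edge['coverage'] | other['coverage']
                    agenda.append(new)
                    chart.append(new)
    goal = (1 << n_eps) - 1                 # bit vector with all input EPs set
    return [e for e in chart if e['coverage'] == goal]
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Procedure",
"sec_num": null
},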
{
"text": "The worst-case time complexity of chart generation is exponential (even though chart parsing is polynomial). The main reason for this is that in theory a grammar could allow any pair of edges to combine (subject to the restriction described above that the edges cover non-overlapping bags of EPs). For an input semantics containing n EPs, and assuming each EP retrieves a single lexical item, there could in the worst case be O(2 n ) edges, each covering a different subset of the input semantics. Although in the general case we cannot improve the complexity, we can make the processing steps involved cheaper, for instance efficiently checking whether two edges are candidates for being combined (see Section 3.1 below). We can also minimize the number of edges covering each subset of EPs by 'packing' locally equivalent edges (Section 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "2.3"
},
{
"text": "A particular, identifiable source of complexity is that, as Kay [8] notes, when a word has more than one intersective modifier an indefinite number of its modifiers may be applied. For instance, when generating from the MRS in Figure 1 , edges corresponding to the partial realizations athlete, young athlete, Polish athlete, and young Polish athlete will all be constructed. Even if a grammar constrains modifiers so there is only one valid ordering, or the generator is able to pack equivalent edges covering the same EPs, the number of edges built will still be 2 n , because all possible complete and incomplete phrases will be built. Using the example MRS, ultimately useless edges such as the young athlete ran (omitting Polish) will be created.",
"cite_spans": [
{
"start": 64,
"end": 67,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 227,
"end": 235,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Complexity",
"sec_num": "2.3"
},
{
"text": "Kay proposes an approach to this problem in which edges are checked before they are created to see if they would 'seal off' access to a semantic index (x 5 in this case) for which there is still an unincorporated modifier. Although individual sets of modifiers still result in exponential numbers of edges, the exponentiality is prevented from propagating further. However, Carroll et al. [12] argue that this check works only in limited circumstances, since for example in (1) the grammar must allow the index for ran to be available all the way up the tree to How, and simultaneously also make available the indexes for newspapers, say, and athlete at appropriate points so these words could be modified 2 .",
"cite_spans": [
{
"start": 389,
"end": 393,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "2.3"
},
{
"text": "(1) How quickly did the newspapers say the athlete ran? Carroll et al. describe an alternative technique which adjoins intersective modifiers into edges in a second phase, after all possible edges that do not involve intersective modification have been constructed by chart generation. This overcomes the multiple index problem described above and reduces the worst-case complexity of intersective modification in the chart generation phase to polynomial, but unfortunately the subsequent phase which attempts to adjoin sets of modifiers into partial realizations is still exponential. We describe below (Section 3.3) a related technique which delays processing of intersective modifiers by inserting them into the generation forest, taking advantage of dynamic programming to reduce the complexity of the second phase. We also present a different approach which filters out edges based on accessibility of sets of semantic indices (Section 3.4), which covers a wider variety of cases than just intersective modification, and in practice is even more efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "2.3"
},
{
"text": "Exponential numbers of edges imply exponential numbers of realizations. For an application task we would usually want only one (the most natural or fluent) realization, or a fixed small number of good realizations that the application could then itself select from. In Section 3.5 we present an efficient algorithm for selectively unpacking the generation forest to produce the n-best realizations according to a statistical model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "2.3"
},
{
"text": "Once lexical lookup is complete and up until a final, post-generation comparison of results to the input MRS, the core phases of our generator exclusively operate on typed feature structures (which are associated to chart edges). For efficiency reasons, our algorithm avoids any complex operations on the original logical-form input MRS. In order to best guide the search from the input semantics, however, we employ two techniques that relate components of the logical form to corresponding sub-structures in the feature structure (FS) universe: (i) Skolemization of variables and (ii) indexing by EP coverage. Of these, only the latter we find commonly discussed in the literature, but we expect some equivalent of making variables ground to be present in most implementations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Chart Edges and Semantic Components",
"sec_num": "3.1"
},
{
"text": "As part of the process of looking up lexical items and grammar rules introducing semantics in order to initialize the generator chart, all FS correspondences to logical variables from the input MRS are made 'ground' by specializing the relevant sub-structure with Skolem constants uniquely reflecting the underlying variable, for example adding constraints like [SKOLEM \"x5\"] for all occurrences of x 5 from our example MRS. Skolemization, thus, assumes that distinct variables from the input MRS, where supplied, cannot become co-referential during generation. Enforcing variable identity at the FS level makes sure that composition (by means of FS unification) during rule applications is compatible to the input semantics. In addition, it enables efficient pre-unification filtering (see 'quick-check' below), and is a prerequisite for our index accessibility test described in Section 3.4 below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Chart Edges and Semantic Components",
"sec_num": "3.1"
},
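{
"text": "A minimal sketch of the Skolemization step over a feature structure represented as nested Python dicts: every sub-structure that realizes an input MRS variable receives a SKOLEM constraint naming that variable, so that unification can never conflate two distinct input variables. The dict encoding and the binding map are illustrative assumptions.

def skolemize(fs, binding):
    '''Destructively add a SKOLEM constant to every sub-structure of `fs`
    that realizes an input MRS variable; `binding` maps node ids to
    variable names such as 'x5'.'''
    seen = set()
    def walk(node):
        if not isinstance(node, dict) or id(node) in seen:
            return
        seen.add(id(node))
        var = binding.get(id(node))
        if var is not None:
            node['SKOLEM'] = var    # a unique ground constant, e.g. 'x5'
        for value in node.values():
            walk(value)
    walk(fs)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Chart Edges and Semantic Components",
"sec_num": null
},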
{
"text": "In chart parsing, edges are stored into and retrieved from the chart data structure on the basis of their string start and end positions. This ensures that the parser will only retrieve pairs of chart edges that cover compatible segments of the input string (i.e. that are adjacent with respect to string position). In chart generation, Kay [8] proposed indexing the chart on the basis of logical variables, where each variable denotes an individual entity in the input semantics, and making the edge coverage compatibility check a filter. Edge coverage (with respect to the EPs in the input semantics) would be encoded as a bit vector, and for a pair of edges to be combined their corresponding bit vectors would have to be disjoint.",
"cite_spans": [
{
"start": 337,
"end": 344,
"text": "Kay [8]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Chart Edges and Semantic Components",
"sec_num": "3.1"
},
{
"text": "We implement Kay's edge coverage approach, using it not only when combining active and inactive edges, but also for two further tasks in our approach to realization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Chart Edges and Semantic Components",
"sec_num": "3.1"
},
{
"text": "\u2022 in the second phase of chart generation to determine which intersective modifier(s) can be adjoined into a partially incomplete subtree; and \u2022 as part of the test for whether one edge subsumes another, for local ambiguity factoring (see Section 3.2 below) 3 .",
"cite_spans": [
{
"start": 258,
"end": 259,
"text": "3",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Chart Edges and Semantic Components",
"sec_num": "3.1"
},
{
"text": "In our testing with the LinGO ERG, many hundreds or thousands of edges may be produced for non-trivial input semantics, but there are only a relatively small number of logical variables. Indexing edges on these variables involves bookkeeping that turns out not to be worthwhile in practice; logical bit vector operations on edge coverage take negligible time, and these serve to filter out the majority of edge combinations with incompatible indices. The remainder are filtered out efficiently before unification is attempted by a check on which rules can dominate which others, and the quick-check, as developed for unification-based parsing [14] . For the quick-check, it turns out that the same set of feature paths that most frequently lead to unification failure in parsing also work well in generation.",
"cite_spans": [
{
"start": 643,
"end": 647,
"text": "[14]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Chart Edges and Semantic Components",
"sec_num": "3.1"
},
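{
"text": "The quick-check itself can be pictured as follows: both feature structures expose the types found at a fixed, empirically selected set of paths, and unification is attempted only if every pair of types is compatible in the type hierarchy. The particular paths and the glb() (greatest lower bound) helper below are placeholders for grammar-specific machinery, not the actual path set used with the ERG.

QC_PATHS = [('SYNSEM', 'LOCAL', 'CAT', 'HEAD'),
            ('SYNSEM', 'LOCAL', 'CONT', 'HOOK', 'INDEX')]   # illustrative only

def type_at(fs, path):
    '''Return the type found at `path`, or None if the path is absent.'''
    for feature in path:
        if not isinstance(fs, dict) or feature not in fs:
            return None
        fs = fs[feature]
    return fs.get('__type__') if isinstance(fs, dict) else fs

def quick_check(fs1, fs2, glb):
    '''False if some path already proves that unification must fail.'''
    for path in QC_PATHS:
        t1, t2 = type_at(fs1, path), type_at(fs2, path)
        if t1 is not None and t2 is not None and glb(t1, t2) is None:
            return False
    return True
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relating Chart Edges and Semantic Components",
"sec_num": null
},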
{
"text": "In chart parsing with context free grammars, the parse forest (a compact representation of the full set of parses) can only be computed in polynomial time if sub-analyses dominated by the same non-terminal and covering the same segment of the input string are 'packed', or factored into a single unitary representation [15] . Similar benefits accrue for unification grammars without a context free backbone such as the LinGO ERG, if the category equality test is replaced by feature structure subsumption [16] 4 ; also, feature structures representing the derivation history need to be restricted out when applying a rule [17] . The technique can be applied to chart realization if the input span is expressed as coverage of the input semantics. For example, with the input of Figure 1 , the two phrases in (2) below would have equivalent feature structures, and we pack the one found second into the one found first, which then acts as the representative edge for all subsequent processing.",
"cite_spans": [
{
"start": 319,
"end": 323,
"text": "[15]",
"ref_id": "BIBREF14"
},
{
"start": 505,
"end": 509,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 510,
"end": 511,
"text": "4",
"ref_id": "BIBREF3"
},
{
"start": 622,
"end": 626,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 777,
"end": 785,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Local Ambiguity Factoring",
"sec_num": "3.2"
},
{
"text": "(2) young Polish athlete | Polish young athlete We have found that packing is crucial to efficiency: realization time is improved by more than an order of magnitude for inputs with more than 500 realizations (see Section 4). Changing packing to operate with respect just to feature structure equality rather than subsumption degrades throughput significantly, resulting in worse overall performance than with packing disabled completely: in other words, equivalence-only packing fails to recoup the cost of the feature structure comparisons involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Ambiguity Factoring",
"sec_num": "3.2"
},
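{
"text": "In outline, subsumption-based packing can be realized as below: a new inactive edge with the same EP coverage as an existing representative is frozen into that representative if its feature structure is subsumed; if instead it subsumes the representative, the two swap roles. The subsumes() argument is a placeholder for real feature structure subsumption, and edges are assumed to carry 'fs', 'coverage', and 'packed' fields.

def pack(new, chart, subsumes):
    '''Pack `new` into a representative edge with identical EP coverage,
    or enter it into the chart as a representative of its own.'''
    new.setdefault('packed', [])
    for edge in chart:
        if edge['coverage'] != new['coverage']:
            continue
        if subsumes(edge['fs'], new['fs']):    # representative is more general
            edge['packed'].append(new)         # freeze `new` under it
            return edge
        if subsumes(new['fs'], edge['fs']):    # `new` is more general: swap
            new['packed'] += edge['packed'] + [edge]
            edge['packed'] = []
            chart.remove(edge)
            break
    chart.append(new)
    return new
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Ambiguity Factoring",
"sec_num": null
},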
{
"text": "A further technique we use is to postpone the creation of feature structures for active edges until they are actually required for a unification operation, since many end up as dead ends. Oepen and Carroll [18] do a similar thing in their 'hyper-active' parsing strategy, for the same reason.",
"cite_spans": [
{
"start": 206,
"end": 210,
"text": "[18]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Ambiguity Factoring",
"sec_num": "3.2"
},
{
"text": "As discussed in Section 2.3, Carroll et al. [12] adjoin intersective modifiers into each partial tree extracted from the forest; their algorithm searches for partitions of modifier phrases to adjoin, and tries all combinations. This process adds an exponential (in the number of modifiers) factor to the complexity of extracting each partial realization. This is obviously unsatisfactory, and in practice is slow for larger problems when there are many possible modifiers. We have devised a better approach which delays processing of intersective modifiers by inserting them into the generation forest at appropriate locations before the forest is unpacked. By doing this, we take advantage of the dynamic programming-based procedure for unpacking the forest to reduce the complexity of the second phase. The procedure is even more efficient if realizations are unpacked selectively (section 3.5).",
"cite_spans": [
{
"start": 44,
"end": 48,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Delayed Modifier Insertion",
"sec_num": "3.3"
},
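{
"text": "We do not reproduce the full algorithm here, but its effect can be sketched as follows: each intersective modifier edge is spliced, as an extra packed alternative, into every forest node that exposes the modifier's semantic index and does not already cover its EPs, so that the ordinary dynamic programming over the forest enumerates modified and unmodified variants alike. The node and edge attributes, and the adjoin() function, are illustrative assumptions rather than the actual implementation.

def insert_modifiers(forest_nodes, modifier_edges, adjoin):
    '''Sketch: splice intersective modifiers into matching forest nodes
    before unpacking, instead of enumerating modifier subsets afterwards.'''
    for mod in modifier_edges:
        for node in forest_nodes:
            if mod['index'] not in node['accessible']:
                continue        # node does not expose the modifiee's index
            if node['coverage'] & mod['coverage']:
                continue        # node already covers the modifier's EPs
            packed = adjoin(node, mod)      # None if adjunction fails
            if packed is not None:
                node['packed'].append(packed)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Delayed Modifier Insertion",
"sec_num": null
},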
{
"text": "Kay's original proposal for dealing efficiently with modifiers founders because more than one semantic index may need to be accessible at any one time (leading to the alternative solutions of modifier adjunction, and of chunking the input semantics -see Sections 2.3 and 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Index Accessibility Filtering",
"sec_num": "3.4"
},
{
"text": "However, it turns out that Kay's proposal can form the basis of a more generally applicable approach to the problem. We assume that we have available an operation collect-semantic-vars() that traverses a feature structure and returns the set of semantic indices that it makes available 5 . We store in each chart edge two sets: one of semantic variables in the feature structure that are accessible (that is, they are present in the feature structure and could potentially be picked by another edge when it is combined with this one), and a second set of inaccessible semantic variables (ones that were once accessible but no longer are). Then,",
"cite_spans": [
{
"start": 286,
"end": 287,
"text": "5",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Index Accessibility Filtering",
"sec_num": "3.4"
},
{
"text": "\u2022 when an active edge is combined with an inactive edge, the accessible sets and inaccessible sets in the resulting edge are the union of the corresponding sets in the original edges; \u2022 when an inactive edge is created, its accessible set is computed to be the semantic indices available in its feature structure, and the variables that used to be accessible but are no longer in the accessible set are added to its inaccessible set, i.e. \u2022 immediately after creating an inactive edge, each EP in the input semantics that the edge does not (yet) cover is inspected, and if the EP's index is in the edge's inaccessible set then the edge is discarded (since there is no way in the future that the EP could be integrated with any extension of the edge's semantics).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Index Accessibility Filtering",
"sec_num": "3.4"
},
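{
"text": "A compact sketch of this bookkeeping, using Python sets over Skolemized variable names; collect_semantic_vars() stands in for the feature structure traversal assumed above, and edges are the dictionaries used in the earlier sketches.

def combine_accessibility(edge1, edge2):
    '''Accessible and inaccessible sets for an edge built from two daughters.'''
    return (edge1['accessible'] | edge2['accessible'],
            edge1['inaccessible'] | edge2['inaccessible'])

def finalize_edge(edge, input_eps, collect_semantic_vars):
    '''On creating an inactive edge, recompute accessibility and filter.
    `input_eps` is a list of (coverage_bit, index_variable) pairs.'''
    accessible = collect_semantic_vars(edge['fs'])
    edge['inaccessible'] = (edge['accessible'] | edge['inaccessible']) - accessible
    edge['accessible'] = accessible
    for bit, index in input_eps:
        if not edge['coverage'] & bit and index in edge['inaccessible']:
            return None     # an uncovered EP's index is sealed off: discard
    return edge
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Index Accessibility Filtering",
"sec_num": null
},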
{
"text": "A nice property of this new technique is that it applies more widely than to just intersective modification: for instance, if the input semantics were to indicate that a phrase should be negated, no edges would be created that extended that phrase without the negation being present. Section 4 shows this technique results in dramatic improvements in realization efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Index Accessibility Filtering",
"sec_num": "3.4"
},
{
"text": "The selective unpacking procedure outlined in this section allows us to extract a small set of n-best realizations from the generation forest at minimal cost. The global rank order is determined by a conditional Maximum Entropy (ME) model -essentially an adaptation of recent HPSG parse selection work to the realization ranking task [19] . We use a similar set of features to Toutanova and Manning [20] , but our procedure differs from theirs in that it applies the stochastic model before unpacking, in a guided search through the generation forest. Thus, we avoid enumerating all candidate realizations. Unlike Malouf and van Noord [21] , on the other hand, we avoid an approximative beam search during forest creation and guarantee to produce exactly the n-best realizations (according to the ME model). Further looking at related parse selection work, our procedure is probably most similar to those of Geman and Johnson [22] given two ways of decomposing 6 , there will be three candidate ways of instantiating 2 and six for 4 , respectively, for a total of nine full trees.",
"cite_spans": [
{
"start": 334,
"end": 338,
"text": "[19]",
"ref_id": "BIBREF18"
},
{
"start": 399,
"end": 403,
"text": "[20]",
"ref_id": "BIBREF19"
},
{
"start": 635,
"end": 639,
"text": "[21]",
"ref_id": "BIBREF20"
},
{
"start": 926,
"end": 930,
"text": "[22]",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selective Unpacking",
"sec_num": "3.5"
},
{
"text": "Tsujii [23] , but neither provide a detailed discussion of the dependencies between locality of ME features and the complexity of the read-out procedure from a packed forest. Two key notions in our selective unpacking procedure are the concepts of (i) decomposing an edge locally into candidate ways of instantiating it and of (ii) nested contexts of 'horizontal' search for ranked hypotheses (i.e. uninstantiated edges) about candidate subtrees. See Figure 2 for examples of edge decomposition, but note that the 'depth' of each local cross-product needs to correspond to the maximum required context size of ME features; for ease of exposition, our examples assume a context size of no more than depth one (but the algorithm straightforwardly generalizes to larger contexts). Given one decomposition -i.e. a vector of candidate daughters to a token construction -there can be multiple ways of instantiating each daughter: a parallel index vector i 0 . . . i n serves to keep track of 'vertical' search among daughter hypotheses, where each index i j denotes the i-th instantiation (hypothesis) of the daughter at position j. Hypotheses are associated with ME scores and ordered within each nested context by means of a local agenda (stored in the original representative edge, for convenience). Given the additive nature of ME scores on complete derivations, it can be guaranteed that larger derivations including an edge e as a sub-constituent on the fringe of their local context of optimization will use the best instantiation of e in their own best instantiation. The second-best larger instantiation, in turn, will be obtained from moving to the second-best hypothesis for one of the elements in the (right-hand side of the) decomposition. Therefore, nested local optimizations result in a top-down, exact n-best search through the generation forest, and matching the 'depth' of local decompositions to the maximum required ME feature context effectively prevents exhaustive cross-multiplication of packed nodes.",
"cite_spans": [
{
"start": 7,
"end": 11,
"text": "[23]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 451,
"end": 459,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selective Unpacking",
"sec_num": "3.5"
},
{
"text": "The main function hypothesize-edge() in Figure 3 controls both the 'horizontal' and 'vertical' search, initializing the set of decompositions and pushing initial hypotheses onto the local agenda when called on an edge for the first time (lines 11 -17) . Furthermore, the procedure retrieves the current next-best hypothesis from the agenda (line 18), generates new hypotheses by advancing daughter indices (while skipping over Fig. 3 . Selective unpacking procedure, enumerating the n best realizations for a top-level result edge from the generation forest. An auxiliary function decompose-edge() performs local crossmultiplication as shown in the examples in Figure 2 . Another utility function not shown in pseudocode is advance-indices(), another 'driver' routine searching for alternate instantiations of daughter edges, e.g. advance-indices( 0 2 1 ) \u2192 { 1 2 1 0 3 1 0 2 2 }. Finally, instantiate-hypothesis() is the function that actually builds result trees, replaying the unifications of constructions from the grammar (as identified by chart edges) with the feature structures of daughter constituents.",
"cite_spans": [
],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 3",
"ref_id": null
},
{
"start": 427,
"end": 433,
"text": "Fig. 3",
"ref_id": null
},
{
"start": 661,
"end": 669,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selective Unpacking",
"sec_num": "3.5"
},
{
"text": "configurations seen earlier) and calling itself recursively for each new index (lines 19 -27) , and, finally, arranges for the resulting hypothesis to be cached for later invocations on the same edge and i values (line 28). Note that we only invoke instantiate-hypothesis() on complete, top-level hypotheses, as the ME features of Toutanova and Manning [20] can actually be evaluated prior to building each full feature structure. However, the procedure could be adapted to perform instantiation of sub-hypotheses within each local search, should additional features require it. For better efficiency, our instantiatehypothesis() routine already uses dynamic programming for intermediate results.",
"cite_spans": [
{
"start": 353,
"end": 357,
"text": "[20]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selective Unpacking",
"sec_num": "3.5"
},
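{
"text": "Since the pseudocode of Figure 3 is not reproduced in this version, the following Python sketch captures the same control structure under simplifying assumptions: each forest edge is a dict with a 'decompositions' list (tuples of daughter edges, including packed alternatives), local_score() stands in for the ME model with a feature context of depth one, and instantiate-hypothesis() is reduced to a comment. The agenda-driven search nevertheless enumerates hypotheses in exact best-first order, as described above.

import heapq
from itertools import count

_tiebreak = count()    # keeps heap entries comparable without comparing edges

def hypothesize_edge(edge, i, state, local_score):
    '''Return the i-th best hypothesis (score, decomposition, daughter indices)
    for `edge`, or None; per-edge agendas, results, and caches live in `state`.'''
    st = state.setdefault(id(edge), {'agenda': [], 'results': [], 'seen': set()})
    if not st['results'] and not st['agenda']:
        for dec in edge['decompositions']:     # seed: best hypothesis per decomposition
            push_hypothesis(edge, dec, (0,) * len(dec), st, state, local_score)
    while len(st['results']) <= i and st['agenda']:
        neg, _, dec, idx = heapq.heappop(st['agenda'])
        st['results'].append((-neg, dec, idx))
        for nxt in advance_indices(idx):       # successors of the popped hypothesis
            push_hypothesis(edge, dec, nxt, st, state, local_score)
    return st['results'][i] if i < len(st['results']) else None

def push_hypothesis(edge, dec, idx, st, state, local_score):
    if (id(dec), idx) in st['seen']:           # skip configurations seen earlier
        return
    st['seen'].add((id(dec), idx))
    total = local_score(edge, dec)             # ME score of this local context
    for daughter, j in zip(dec, idx):
        sub = hypothesize_edge(daughter, j, state, local_score)
        if sub is None:
            return                             # daughter has no j-th hypothesis
        total += sub[0]                        # ME scores are additive
    heapq.heappush(st['agenda'], (-total, next(_tiebreak), dec, idx))

def advance_indices(idx):
    '''advance_indices((0, 2, 1)) -> [(1, 2, 1), (0, 3, 1), (0, 2, 2)].'''
    return [idx[:k] + (idx[k] + 1,) + idx[k + 1:] for k in range(len(idx))]

def n_best(root, n, local_score):
    '''Enumerate up to n best hypotheses for the top-level result edge;
    instantiate-hypothesis() would replay the unifications for each.'''
    state, results = {}, []
    for i in range(n):
        hyp = hypothesize_edge(root, i, state, local_score)
        if hyp is None:
            break
        results.append(hyp)
    return results
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selective Unpacking",
"sec_num": null
},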
{
"text": "Below we present an empirical evaluation of each of the refinements discussed in Sections 3.2 through 3.5. Using the LinGO ERG and its 'hike' treebank -a 330-sentence Table 1 . Realization efficiency for various instantiations of our algorithm. The table is broken down by average ambiguity rates, the first two columns showing the number of items per aggregate and average string length. Subsequent columns show relative cpu time of one-and two-phase realization with or without packing and filtering, shown as a relative multiplier of the baseline performance in the 1p+f+ column. The rightmost column is for selective unpacking of up to 10 trees from the forest produced by the baseline configuration, again as a factor of the baseline. (The quality of the selected trees depends on the statistical model and the degree of overgeneration in the grammar, and is a completely separate issue which we do not address in this paper). Table 1 ); from the available reference treebanks for the ERG, 'hike' appears to be among the more complex data sets. Table 1 summarizes relative generator efficiency for various configurations, where we use the best-performing exhaustive procedure 1p + f + (one-phase generation with packing and index accessibility filtering) as a baseline. The configuration 1p \u2212 f \u2212 (onephase, no packing or filtering) corresponds to the basic procedure suggested by Kay [8] , while 2p \u2212 f \u2212 (two-phase processing of modifiers without packing and filtering) implements the algorithm presented by Carroll et al. [12] . Combining packing and filtering clearly outperforms both these earlier configurations, i.e. giving an up to 50 times speed-up for inputs with large numbers of realizations. Additional columns contrast the various techniques in isolation, thus allowing an assessment of the individual strengths of our proposals. On low-to medium-ambiguity items, for example, filtering gives rise to a bigger improvement than packing, but packing appears to flatten the curve more. Both with and without packing, filtering improves significantly over the Carroll et al. two-phase approach to intersective modifiers (i.e. comparing columns 2p \u2212 f \u2212 and 2p + f \u2212 to 1p \u2212 f + and 1p + f + , respectively), thus confirming the increased generality of our solution to the modification problem. Finally, the benefits of packing and filtering combine more than merely multiplicatively: compared to 1p \u2212 f \u2212 , just filtering gives a speed-up of 5.9, and just packing a speed-up of 4.3. At 25, the product of these factors is well below the overall reduction of 35 that we obtain from the combination of both techniques.",
"cite_spans": [
{
"start": 1390,
"end": 1393,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 1530,
"end": 1534,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 167,
"end": 174,
"text": "Table 1",
"ref_id": null
},
{
"start": 932,
"end": 939,
"text": "Table 1",
"ref_id": null
},
{
"start": 1050,
"end": 1057,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Summary",
"sec_num": "4"
},
{
"text": "While the rightmost column in Table 1 already indicates that 10-best selective unpacking further improves generator performance by close to a factor of two, Figure 4 breaks down generation time with respect to forest creation vs. unpacking time. When plotted against increasing input complexity (in terms of the 'size' of the input MRS), forest creation appears to be a low-order polynomial (or better), whereas exhaustive packed forest creation selective unpacking exhaustive unpacking Fig. 4 . Break-down of generation times (in seconds) according to realization phases and input complexity (approximated in the number of EPs in the original MRS used for generation). The three curves are, from 'bottom' to 'top', the average time for constructing the packed generation forest, selective unpacking time (using n = 10), and exhaustive unpacking time. Note that both unpacking times are shown as increments on top of the forest creation time.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 1",
"ref_id": null
},
{
"start": 157,
"end": 165,
"text": "Figure 4",
"ref_id": null
},
{
"start": 487,
"end": 493,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Summary",
"sec_num": "4"
},
{
"text": "unpacking (necessarily) results in an exponential explosion of generation time: with more than 25 EPs, it clearly dominates total processing time. Selective unpacking, in contrast, appears only mildly sensitive to input complexity and even on complex inputs adds no more than a minor cost to total generation time. Thus, we obtain an overall observed run-time performance of our wide-coverage generator that is bounded (at least) polynomially. Practical generation times using the LinGO ERG average below or around one second for outputs of fifteen words in length, i.e. time comparable to human production.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Summary",
"sec_num": "4"
},
{
"text": "See http://www.delph-in.net/erg/ for background information on the ERG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "White [3] describes an approach to dealing with intersective modifiers which requires the grammarian to write a collection of rules that 'chunk' the input semantics into separate modifier groups which are processed separately; this involves extra manual work, and also appears to suffer from the same multiple index problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We therefore have four operations on bit vectors representing EP coverage (C) in chart edges:\u2022 concatenation of edges e1 and e2 \u2192 e3: C(e3) = OR(C(e1), C(e2));\u2022 can edges e1 and e2 combine? AND(C(e1), C(e2)) = 0;\u2022 do edges e1 and e2 cover the same EPs? C(e1) = C(e2);\u2022 do edges e1, . . . , en cover all input EPs? NOT(OR(C(e1), . . . , C(en)) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
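{
"text": "These four operations translate directly into integer bit arithmetic; below is a sketch using Python's unbounded integers as coverage vectors, with full_mask carrying one bit per input EP.

def concat(c1, c2):
    '''Coverage of an edge built from two daughter edges.'''
    return c1 | c2

def combinable(c1, c2):
    '''Edges may combine only if they cover disjoint EP sets.'''
    return c1 & c2 == 0

def same_coverage(c1, c2):
    '''Packing candidates must cover exactly the same EPs.'''
    return c1 == c2

def complete(coverages, full_mask):
    '''Do these edges jointly cover every input EP?'''
    union = 0
    for c in coverages:
        union |= c
    return union == full_mask
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},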
{
"text": "Using subsumption-based packing means that the parse forest may represent some globally inconsistent analyses, so these must be filtered out when the forest is unpacked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Implementing collect-semantic-vars() can be efficient: searching for Skolem constants throughout the full structure, it does a similar amount of computation as a single unification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On building a more efficient grammar by exploiting types",
"authors": [
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2000,
"venue": "Natural Language Engineering",
"volume": "6",
"issue": "1",
"pages": "15--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flickinger, D.: On building a more efficient grammar by exploiting types. Natural Language Engineering 6 (1) (2000) 15 -28",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Parallel Grammar project",
"authors": [
{
"first": "M",
"middle": [],
"last": "Butt",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Dyvik",
"suffix": ""
},
{
"first": "T",
"middle": [
"H"
],
"last": "King",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Masuichi",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rohrer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the COLING Workshop on Grammar Engineering and Evaluation",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Butt, M., Dyvik, H., King, T.H., Masuichi, H., Rohrer, C.: The Parallel Grammar project. In: Proceedings of the COLING Workshop on Grammar Engineering and Evaluation, Taipei, Taiwan (2002) 1 -7",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reining in CCG chart realization",
"authors": [
{
"first": "M",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 3rd International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "White, M.: Reining in CCG chart realization. In: Proceedings of the 3rd International Con- ference on Natural Language Generation, Hampshire, UK (2004)",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generating tailored, comparative descriptions in spoken dialogue",
"authors": [
{
"first": "J",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Foster",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Lemon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 17th International FLAIRS Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore, J., Foster, M.E., Lemon, O., White, M.: Generating tailored, comparative descriptions in spoken dialogue. In: Proceedings of the 17th International FLAIRS Conference, Miami Beach, FL (2004)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Som\u00e5 kapp-ete med trollet? Towards MRS-based Norwegian -English Machine Translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Dyvik",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "L\u00f8nning",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Beermann",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hellan",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Johannessen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Meurer",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Nordg\u00e5rd",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ros\u00e9n",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oepen, S., Dyvik, H., L\u00f8nning, J.T., Velldal, E., Beermann, D., Carroll, J., Flickinger, D., Hellan, L., Johannessen, J.B., Meurer, P., Nordg\u00e5rd, T., Ros\u00e9n, V.: Som\u00e5 kapp-ete med trollet? Towards MRS-based Norwegian -English Machine Translation. In: Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation, Baltimore, MD (2004)",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Shake-and-bake translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Whitelock",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "610--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Whitelock, P.: Shake-and-bake translation. In: Proceedings of the 14th International Confer- ence on Computational Linguistics, Nantes, France (1992) 610 -616",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generation of text from logical formulae",
"authors": [
{
"first": "J",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 1993,
"venue": "Machine Translation",
"volume": "8",
"issue": "",
"pages": "209--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phillips, J.: Generation of text from logical formulae. Machine Translation 8 (1993) 209 - 235",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Chart generation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "200--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kay, M.: Chart generation. In: Proceedings of the 34th Meeting of the Association for Computational Linguistics, Santa Cruz, CA (1996) 200 -204",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generating with a grammar based on tree descriptions. A constraintbased approach",
"authors": [
{
"first": "C",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Thater",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gardent, C., Thater, S.: Generating with a grammar based on tree descriptions. A constraint- based approach. In: Proceedings of the 39th Meeting of the Association for Computational Linguistics, Toulouse, France (2001)",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semantic head-driven generation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Van Noord",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "",
"pages": "30--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shieber, S., van Noord, G., Pereira, F., Moore, R.: Semantic head-driven generation. Com- putational Linguistics 16 (1990) 30 -43",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A complete, efficient sentence-realization algorithm for unification grammar",
"authors": [
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2nd International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore, R.: A complete, efficient sentence-realization algorithm for unification grammar. In: Proceedings of the 2nd International Natural Language Generation Conference, Harriman, NY (2002) 41 -48",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An efficient chart generator for (semi-)lexicalist grammars",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Poznanski",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 7th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "86--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carroll, J., Copestake, A., Flickinger, D., Poznanski, V.: An efficient chart generator for (semi-)lexicalist grammars. In: Proceedings of the 7th European Workshop on Natural Lan- guage Generation, Toulouse, France (1999) 86 -95",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Minimal Recursion Semantics. An introduction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pollard",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Copestake, A., Flickinger, D., Sag, I., Pollard, C.: Minimal Recursion Semantics. An intro- duction. (1999)",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A bag of useful techniques for efficient and robust parsing",
"authors": [
{
"first": "B",
"middle": [],
"last": "Kiefer",
"suffix": ""
},
{
"first": "H",
"middle": [
"U"
],
"last": "Krieger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "473--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiefer, B., Krieger, H.U., Carroll, J., Malouf, R.: A bag of useful techniques for efficient and robust parsing. In: Proceedings of the 37th Meeting of the Association for Computational Linguistics, College Park, MD (1999) 473 -480",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The structure of shared forests in ambiguous parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Billot",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 27th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "143--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billot, S., Lang, B.: The structure of shared forests in ambiguous parsing. In: Proceedings of the 27th Meeting of the Association for Computational Linguistics, Vancouver, BC (1989) 143 -151",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Ambiguity packing in constraint-based parsing. Practical results",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st Conference of the North American Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "162--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oepen, S., Carroll, J.: Ambiguity packing in constraint-based parsing. Practical results. In: Proceedings of the 1st Conference of the North American Chapter of the ACL, Seattle, WA (2000) 162 -169",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using restriction to extend parsing algorithms for complex feature-based formalisms",
"authors": [
{
"first": "S",
"middle": [],
"last": "Shieber",
"suffix": ""
}
],
"year": 1985,
"venue": "Proceedings of the 23rd Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "145--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shieber, S.: Using restriction to extend parsing algorithms for complex feature-based for- malisms. In: Proceedings of the 23rd Meeting of the Association for Computational Linguis- tics, Chicago, IL (1985) 145 -152",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Performance profiling for parser engineering",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2000,
"venue": "Natural Language Engineering",
"volume": "6",
"issue": "1",
"pages": "81--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oepen, S., Carroll, J.: Performance profiling for parser engineering. Natural Language Engineering 6 (1) (2000) 81 -97",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Paraphrasing treebanks for stochastic realization ranking",
"authors": [
{
"first": "E",
"middle": [],
"last": "Velldall",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 3rd Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Velldall, E., Oepen, S., Flickinger, D.: Paraphrasing treebanks for stochastic realization rank- ing. In: Proceedings of the 3rd Workshop on Treebanks and Linguistic Theories, T\u00fcbingen, Germany (2004)",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Feature selection for a rich HPSG grammar using decision trees",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 6th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toutanova, K., Manning, C.: Feature selection for a rich HPSG grammar using decision trees. In: Proceedings of the 6th Conference on Natural Language Learning, Taipei, Taiwan (2002)",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Wide coverage parsing with stochastic attribute value grammars",
"authors": [
{
"first": "R",
"middle": [],
"last": "Malouf",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Van Noord",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the IJCNLP workshop Beyond Shallow Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malouf, R., van Noord, G.: Wide coverage parsing with stochastic attribute value grammars. In: Proceedings of the IJCNLP workshop Beyond Shallow Analysis, Hainan, China (2004)",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dynamic programming for parsing and estimation of stochastic unification-based grammars",
"authors": [
{
"first": "S",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geman, S., Johnson, M.: Dynamic programming for parsing and estimation of stochastic unification-based grammars. In: Proceedings of the 40th Meeting of the Association for Computational Linguistics, Philadelphia, PA (2002)",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Maximum entropy estimation for feature forests",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Human Language Technology Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miyao, Y., Tsujii, J.: Maximum entropy estimation for feature forests. In: Proceedings of the Human Language Technology Conference, San Diego, CA (2002)",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "[incr tsdb()] at 15-apr-2005 (00:55 h))",
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "Sample generator forest and sub-node decompositions: ovals in the forest (on the left) indicate packing of edges under subsumption, i.e. edges 4 , 7 , 9 , and 11 are not in the generator chart proper. During unpacking, there will be multiple ways of instantiating a chart edge, each obtained from cross-multiplying alternate daughter sequences locally. The elements of this cross-product we call decomposition, and they are pivotal points both for stochastic scoring and dynamic programming in selective unpacking. The table on the right shows all non-leaf decompositions for our example generator forest:",
"html": null,
"content": "<table><tr><td>1 \u2192 2 3</td><td>4 3</td><td/><td/></tr><tr><td>2 \u2192 5 6</td><td>5 7</td><td/><td/></tr><tr><td>4 \u2192 8 6</td><td>8 7</td><td>9 6</td><td>9 7</td></tr><tr><td>6 \u2192 10</td><td>11</td><td/><td/></tr><tr><td>Fig. 2.</td><td/><td/><td/></tr><tr><td/><td/><td/><td>and Miyao and</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "configurations, starting from the 'gold' standard MRS formula recorded for each utterance in the treebank. At 12.8 words, average sentence length in the original 'hike' corpus is almost exactly what we see as the average length of all paraphrases obtained from the generator (see",
"html": null,
"content": "<table><tr><td>Aggregate</td><td/><td>\u03c6</td><td>\u00d7</td><td>\u00d7</td><td>\u00d7</td><td>\u00d7</td><td>\u00d7</td><td>s</td><td>\u00d7</td></tr><tr><td>500 &lt; trees</td><td>9</td><td>23.9</td><td>31.76</td><td>20.95</td><td>11.98</td><td>9.49</td><td>3.69</td><td colspan=\"2\">31.49 0.33</td></tr><tr><td>100 &lt; trees \u2264 500</td><td>22</td><td>17.4</td><td>53.95</td><td>36.80</td><td>3.80</td><td>8.70</td><td>4.66</td><td colspan=\"2\">5.61 0.42</td></tr><tr><td>50 &lt; trees \u2264 100</td><td>21</td><td>18.1</td><td>51.53</td><td>13.12</td><td>1.79</td><td>8.09</td><td>2.81</td><td colspan=\"2\">3.74 0.62</td></tr><tr><td>10 &lt; trees \u2264 50</td><td>80</td><td>14.6</td><td>35.50</td><td>18.55</td><td>1.82</td><td>6.38</td><td>3.67</td><td colspan=\"2\">1.77 0.89</td></tr><tr><td>0 \u2264 trees \u2264 10</td><td>185</td><td>10.5</td><td>9.62</td><td>6.83</td><td>1.19</td><td>6.86</td><td>3.62</td><td colspan=\"2\">0.58 0.95</td></tr><tr><td>Overall</td><td>317</td><td>12.9</td><td>35.03</td><td>20.22</td><td>5.97</td><td>8.21</td><td>3.74</td><td colspan=\"2\">2.32 0.58</td></tr><tr><td>Coverage</td><td/><td/><td>95%</td><td>97%</td><td>99%</td><td colspan=\"4\">99% 100% 100% 100%</td></tr><tr><td colspan=\"10\">collection of instructional text taken from Norwegian tourism brochures -we bench-</td></tr><tr><td colspan=\"2\">marked various generator</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null
}
}
}
}