{ "paper_id": "I05-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:50.227235Z" }, "title": "Linguistically-Motivated Grammar Extraction, Generalization and Adaptation", "authors": [ { "first": "Yu-Ming", "middle": [], "last": "Hsieh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei" } }, "email": "" }, { "first": "Duen-Chi", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei" } }, "email": "" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei" } }, "email": "kchen@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order to obtain a high precision and high coverage grammar, we proposed a model to measure grammar coverage and designed a PCFG parser to measure efficiency of the grammar. To generalize grammars, a grammar binarization method was proposed to increase the coverage of a probabilistic contextfree grammar. In the mean time linguistically-motivated feature constraints were added into grammar rules to maintain precision of the grammar. The generalized grammar increases grammar coverage from 93% to 99% and bracketing F-score from 87% to 91% in parsing Chinese sentences. To cope with error propagations due to word segmentation and part-of-speech tagging errors, we also proposed a grammar blending method to adapt to such errors. The blended grammar can reduce about 20~30% of parsing errors due to error assignment of pos made by a word segmentation system.", "pdf_parse": { "paper_id": "I05-1016", "_pdf_hash": "", "abstract": [ { "text": "In order to obtain a high precision and high coverage grammar, we proposed a model to measure grammar coverage and designed a PCFG parser to measure efficiency of the grammar. To generalize grammars, a grammar binarization method was proposed to increase the coverage of a probabilistic contextfree grammar. In the mean time linguistically-motivated feature constraints were added into grammar rules to maintain precision of the grammar. The generalized grammar increases grammar coverage from 93% to 99% and bracketing F-score from 87% to 91% in parsing Chinese sentences. To cope with error propagations due to word segmentation and part-of-speech tagging errors, we also proposed a grammar blending method to adapt to such errors. The blended grammar can reduce about 20~30% of parsing errors due to error assignment of pos made by a word segmentation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Treebanks provide instances of phrasal structures and their statistical distributions. However none of treebanks provide sufficient amount of samples which cover all types of phrasal structures, in particular, for the languages without inflectional markers, such as Chinese. It results that grammars directly extracted from treebanks suffer low coverage rate and low precision [7] . However arbitrarily generalizing applicable rule patterns may cause over-generation and increase ambiguities. It may not improve parsing performance [7] . Therefore a new approach of grammar binarization was proposed in this paper. The binarized grammars were derived from probabilistic context-free grammars (PCFG) by rule binarization. 
The approach was motivated by the linguistic fact that adjuncts could be arbitrarily occurred or not occurred in a phrase. The binarized grammars have better coverage than the original grammars directly extracted from treebank. However they also suffer problems of over-generation and structure-ambiguity. Contemporary grammar formalisms, such as GPSG, LFG, HPSG, take phrase structure rules as backbone for phrase structure representation and adding feature constraints to eliminate illegal or non-logical structures. In order to achieve higher coverage, the backbone grammar rules (syntactic grammar) are allowed to be over-generation and the feature constraints (semantic grammar for world knowledge) eliminate superfluous structures and increase the precision of grammar representation. Recently, probabilistic preferences for grammar rules were incorporated to resolve structure-ambiguities and had great improvements on parsing performances [2, 6, 10] . Regarding feature constrains, it was shown that contexture information of categories of neighboring nodes, mother nodes, or head words are useful for improving grammar precision and parsing performances [1, 2, 7, 10, 12] . However tradeoffs between grammar coverage and grammar precision are always inevitable. Excessive grammatical constraints will reduce grammar coverage and hence reduce parsing performances. On the other hand, loosely constrained grammars cause structure-ambiguities and also reduce parsing performances. In this paper, we consider grammar optimization in particular for Chinese language. Linguistically-motivated feature constraints were added to the grammar rules and evaluated to maintain both grammar coverage and precision. In section 2, the experimental environments were introduced. Grammar generalization and specialization methods were discussed in section 3. Grammars adapting to pos-tagging errors were discussed in section 4. Conclusions and future researches were stated in the last section.", "cite_spans": [ { "start": 377, "end": 380, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 532, "end": 535, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 1668, "end": 1671, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 1672, "end": 1674, "text": "6,", "ref_id": "BIBREF5" }, { "start": 1675, "end": 1678, "text": "10]", "ref_id": "BIBREF9" }, { "start": 1884, "end": 1887, "text": "[1,", "ref_id": "BIBREF0" }, { "start": 1888, "end": 1890, "text": "2,", "ref_id": "BIBREF1" }, { "start": 1891, "end": 1893, "text": "7,", "ref_id": "BIBREF6" }, { "start": 1894, "end": 1897, "text": "10,", "ref_id": "BIBREF9" }, { "start": 1898, "end": 1901, "text": "12]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The complete research environment, as shown in the figure 1, comprises of the following five modules and functions. a) Word segmentation module: identify words including out-of-vocabulary word and provide their syntactic categories. b) Grammar construction module: extract and derive (perform rule generalization, specialization and adaptation processes) probabilistic grammars from treebanks. c) PCFG parser: parse input sentences. d) Evaluation module: evaluate performances of parsers and grammars. e) Semantic role assignment module: resolve semantic relations for constituents. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Research Environments", "sec_num": "2" }, { "text": "Grammars are extracted from Sinica Treebank [4, 5] . 
Sinica Treebank version 2.0 contains 38,944 tree structures and 230,979 words. It provides instances of phrasal structures and their statistical distributions. In Sinica Treebank, each sentence is annotated with its syntactic structure and with semantic roles for constituents in a dependency framework. Since the Treebank cannot provide a sufficient number of samples to cover all types of phrasal structures, grammars directly extracted from it suffer from a low coverage rate [5] . Therefore grammar generalization and specialization processes are carried out to obtain grammars with better coverage and precision. The detailed processes are discussed in section 3.", "cite_spans": [ { "start": 44, "end": 47, "text": "[4,", "ref_id": "BIBREF3" }, { "start": 48, "end": 50, "text": "5]", "ref_id": "BIBREF4" }, { "start": 544, "end": 547, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar Extraction Module", "sec_num": "2.1" }, { "text": "Probabilistic context-free parsing strategies were used as our parsing model [2, 6, 8] . Calculating rule probabilities from a treebank is straightforward; we use maximum likelihood estimation to estimate the rule probabilities, as in [2] . The parser adopts Earley's algorithm [8] , a top-down, left-to-right algorithm. If a binarized grammar is used, the resulting binary structures are normalized back into regular phrase structures by removing the intermediate nodes. Grammar efficiency is evaluated according to its parsing performance.", "cite_spans": [ { "start": 81, "end": 84, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 85, "end": 87, "text": "6,", "ref_id": "BIBREF5" }, { "start": 88, "end": 90, "text": "8]", "ref_id": "BIBREF7" }, { "start": 246, "end": 249, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 292, "end": 295, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "PCFG Parser and Grammar Performance Evaluation", "sec_num": "2.2" }, { "text": "Three sets of testing data were used in our performance evaluation. Their basic statistics are shown in Table 1 . The Sinica, Sinorama, and Textbook sets represent moderate, hard, and easy data respectively. The token coverage of a set of rules is the ceiling that the parsing algorithm can achieve. Tradeoff effects between grammar coverage and parsing F-score can thus be examined for each set of rules.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments and Performance Evaluation", "sec_num": "2.3" }, { "text": "Using the above-mentioned research environment, we intend to find the most effective grammar generalization method and specialization features for Chinese. There are several different approaches to extending an existing or extracted grammar. A na\u00efve approach is to generalize a fine-grained rule into a coarse-grained rule. This approach does not generate new patterns; only the number of patterns applicable to each word is increased. However, it was shown that arbitrarily increasing the applicable rule patterns does increase the coverage rate of a grammar but degrades parsing performance [5] . 
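For reference, the baseline grammar that these generalization methods start from is simply read off the treebank, as described in sections 2.1-2.2. The following is a minimal sketch of that extraction and maximum-likelihood (relative-frequency) estimation; the tree encoding and function names are illustrative assumptions, not the actual CKIP implementation.

```python
from collections import defaultdict

# A tree is (category, children); a leaf is (pos, word).
# This toy tree mimics the Sinica-style structure shown in Fig. 2:
# S(NP(Nh:ta)|VF:jiao|NP(Nb:Lisi)|VP(VC:jian|NP(Na:qiu)))
tree = ("S", [("NP", [("Nh", "ta")]),
              ("VF", "jiao"),
              ("NP", [("Nb", "Lisi")]),
              ("VP", [("VC", "jian"), ("NP", [("Na", "qiu")])])])

def extract_rules(node, counts):
    """Collect one context-free rule per internal node: LHS -> categories of its children."""
    cat, children = node
    if isinstance(children, str):                 # (pos, word) leaf: no phrasal rule
        return
    counts[(cat, tuple(child[0] for child in children))] += 1
    for child in children:
        extract_rules(child, counts)

def mle_probabilities(counts):
    """P(LHS -> RHS) = count(LHS -> RHS) / count(LHS), i.e. relative frequency."""
    lhs_totals = defaultdict(int)
    for (lhs, _), n in counts.items():
        lhs_totals[lhs] += n
    return {rule: n / lhs_totals[rule[0]] for rule, n in counts.items()}

counts = defaultdict(int)
extract_rules(tree, counts)
for (lhs, rhs), p in mle_probabilities(counts).items():
    print(f"{lhs} -> {' '.join(rhs)}    {p:.2f}")
```

In a real setting the counts are of course accumulated over all 38,944 trees rather than a single example.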
A better approach is to generalize and specialize rules in a linguistically-motivated way.", "cite_spans": [ { "start": 588, "end": 591, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar Generalization and Specialization", "sec_num": "3" }, { "text": "The length of a phrase in the Treebank is variable, and long phrases usually suffer from low probability. Therefore most PCFG approaches adopt a binary equivalence grammar, such as Chomsky normal form (CNF). For instance, a grammar rule S -> NP Pp Adv V can be replaced by the set of equivalent rules {S -> NP R0, R0 -> Pp R1, R1 -> Adv V}. The binarization method proposed in our system is different from CNF: it generalizes the original grammar to broader coverage. For instance, after right-association binarization 1 , the above rule produces the following three binary rules {S -> NP S', S' -> Pp S', S' -> Adv V}. As a result, constituents (adjuncts and arguments) may or may not occur at almost any place in the phrase. This partially fulfills the linguistic fact that adjuncts in a phrase occur arbitrarily, but it also violates the fact that arguments do not occur arbitrarily. Experimental results on the Sinica testing data showed that the grammar token coverage increased from 92.8% to 99.4%, but the labeling F-score dropped from 82.43% to 82.11% [7] . Therefore feature constraints were added into the binary rules to limit the over-generation caused by recursively adding constituents into intermediate-phrase types, such as S' in the above example.", "cite_spans": [ { "start": 1075, "end": 1078, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Binary Grammar Generation, Generalization, and Specialization", "sec_num": "3.1" }, { "text": "Feature-attached rules look like the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binary Grammar Generation, Generalization, and Specialization", "sec_num": "3.1" }, { "text": "S' -left:Adv-head:V -> Adv V; S' -left:Pp-head:V -> Pp S' -left:Adv-head:V ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binary Grammar Generation, Generalization, and Specialization", "sec_num": "3.1" }, { "text": "The intermediate node S' -left:Pp-head:V says that it is a partial S structure whose leftmost constituent is Pp and whose phrasal head is V. Here the leftmost feature constrains the linear order of constituents, and the head feature implies that the structure patterns are head-word dependent. Both constraints are linguistically plausible. Another advantage of the feature-constrained binary grammar is that, in addition to rule probability, it is easy to implement the association strength between modifier word and head word to evaluate the plausibility of derived structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binary Grammar Generation, Generalization, and Specialization", "sec_num": "3.1" }, { "text": "Adding feature constraints into grammar rules attempts to increase the precision of the grammar representation. However, the side-effect is that it also reduces grammar coverage. Therefore grammar design is a balance between precision and coverage: we are looking for a grammar with the highest coverage and precision. The tradeoff depends on the ambiguity resolution power of the adopted parser. If the parser's ambiguity resolution is strong and robust, grammar coverage might be more important than grammar precision; on the other hand, a weak parser is better served by grammars with more feature constraints. 
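A minimal sketch of the right-association binarization with the Left and Head features attached to the intermediate nodes; the representation and the name binarize_right are illustrative assumptions rather than the authors' implementation.

```python
def binarize_right(lhs, rhs, head_pos):
    """Right-association binarization of a flat rule lhs -> rhs (a list of categories).
    Every intermediate node lhs' carries two features:
      left: the category of its own leftmost daughter (constrains linear order),
      head: the pos of the phrasal head (makes the pattern head-word dependent)."""
    if len(rhs) <= 2:                              # already unary or binary
        return [(lhs, list(rhs))]
    rules = []
    inter = f"{lhs}'-left:{rhs[1]}-head:{head_pos}"
    rules.append((lhs, [rhs[0], inter]))           # e.g. S -> NP S'
    for i in range(1, len(rhs) - 2):
        nxt = f"{lhs}'-left:{rhs[i + 1]}-head:{head_pos}"
        rules.append((inter, [rhs[i], nxt]))       # e.g. S' -> Pp S'
        inter = nxt
    rules.append((inter, [rhs[-2], rhs[-1]]))      # e.g. S' -> Adv V
    return rules

# The running example: S -> NP Pp Adv V, whose phrasal head pos is V.
for lhs, rhs in binarize_right("S", ["NP", "Pp", "Adv", "V"], "V"):
    print(lhs, "->", " ".join(rhs))
```

Running it prints the three feature-attached rules listed above: S -> NP S'-left:Pp-head:V, S'-left:Pp-head:V -> Pp S'-left:Adv-head:V, and S'-left:Adv-head:V -> Adv V.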
In our experiments, we consider grammars suited for PCFG parsing. The following are some of the most important linguistically-motivated features that have been tested. Each feature-constrained grammar is tested and evaluated; Table 2 shows the experimental results. Since all features have their own linguistic motivations, the resulting feature-constrained grammars maintain high coverage while improving grammar precision. Each feature therefore improves the parsing performance to some degree, and the feature of the leftmost daughter node, which constrains the linear order of constituents, is the most effective one. The Left-constraint-added grammar reduces grammar token coverage very little and significantly increases label and bracket F-scores.", "cite_spans": [], "ref_spans": [ { "start": 852, "end": 859, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Feature Constraints for Reducing Ambiguities of Generalized Grammars", "sec_num": "3.2" }, { "text": "It is shown that all linguistically-motivated features are more or less effective. The leftmost-constituent feature, which constrains the linear order of constituents, is the most effective feature. The mother-node feature is the least effective, since in Chinese the syntactic structures of a phrase type do not vary much across its different grammatical functions. Since all the above features are effective, we would like to see the results of multi-feature combinations. Many different feature combinations were tested. The experimental results show that none of the feature combinations outperform the binary grammar with the Left and Head1/0 features, not even the grammar combining all features, as shown in Table 3 and Table 4. Here LF-1 and BF-1 measure the label and bracket F-scores only on the sentences that receive parsing results (i.e. sentences for which the parser fails to produce output are ignored). The results show that the grammar with all feature constraints has better LF-1 and BF-1 scores, since that grammar has higher precision. However, the overall performances, i.e. the LF and BF scores, are not better than those of the simpler grammar with the Left and Head1/0 feature constraints, since the higher-precision grammar loses a slight edge in grammar coverage. The result clearly shows that tradeoffs do exist between grammar precision and coverage. It also suggests that if a feature constraint improves grammar precision a lot but also reduces grammar coverage a lot, it is better to treat such a feature constraint as a soft constraint instead of a hard constraint. A probabilistic preference for such feature parameters is a possible implementation of a soft constraint.", "cite_spans": [], "ref_spans": [ { "start": 717, "end": 724, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Feature Constraints for Reducing Ambiguities of Generalized Grammars", "sec_num": "3.2" }, { "text": "Feature constraints impose additional constraints between constituents of phrase structures. However, different feature constraints serve different functions and have different feature-assignment principles. Some features serve as local constraints, such as Left, Head, and Head0/1; those features are assigned only at local intermediate nodes. Other features are designed for external effects, such as the Mother feature, which is assigned to phrase nodes and their daughter intermediate nodes, as sketched below. 
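Under one common reading, the Mother feature amounts to parent annotation of phrase nodes; the sketch below (an illustration with assumed names, reusing the toy tree encoding from the extraction sketch) contrasts it with the local features, which would only be attached to the S'-style intermediate nodes created during binarization.

```python
# Toy tree in the same (category, children) form as the extraction sketch above.
tree = ("S", [("NP", [("Nh", "ta")]),
              ("VF", "jiao"),
              ("VP", [("VC", "jian"), ("NP", [("Na", "qiu")])])])

def annotate_mother(node, mother=None):
    """Attach the mother category to every phrasal daughter node
    (NP under S becomes NP-S, NP under VP becomes NP-VP);
    pos/word leaves are left unchanged."""
    cat, children = node
    if isinstance(children, str):                  # (pos, word) leaf
        return (cat, children)
    label = f"{cat}-{mother}" if mother else cat
    return (label, [annotate_mother(child, cat) for child in children])

print(annotate_mother(tree))
# ('S', [('NP-S', [('Nh', 'ta')]), ('VF', 'jiao'),
#        ('VP-S', [('VC', 'jian'), ('NP-VP', [('Na', 'qiu')])])])
```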
For instance, NP structures for subjects usually differ from NP structures for objects in English sentences [10] . An NP annotated with the Mother feature can capture this difference: NP-S rules and NP-VP rules are derived from subject NP and object NP structures respectively. However, this difference does not seem very significant in Chinese. Therefore feature selection and assignment should be linguistically motivated, as shown in our experiments.", "cite_spans": [ { "start": 608, "end": 612, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.3" }, { "text": "In conclusion, linguistically-motivated features have better effects on parsing performance than arbitrarily selected features, since they increase grammar precision while only reducing grammar coverage slightly. The feature of the leftmost daughter, which constrains the linear order of constituents, is the most effective feature for parsing. Other subcategorization-related features, such as the mother-node and head features, do not contribute much to parsing F-scores. Such features might be useful for sentence generation rather than parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "3.3" }, { "text": "Perfect testing data was used for the above experiments, without considering word segmentation and pos tagging errors. However, in real life, word segmentation and pos tagging errors degrade parsing performance. The actual parsing performances when accepting input from an automatic word segmentation and pos tagging system are shown in Table 5 . A na\u00efve approach to overcoming pos tagging errors is to delay the resolution of ambiguous pos for words with low-confidence tagging scores and let the parser resolve the ambiguous pos at the parsing stage. The tagging confidence of each word is measured by the following value: ", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 347, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Adapt to Pos Errors Due to Automatic Pos Tagging", "sec_num": "4" }, { "text": "Confidence(w) = P(c1,w) / (P(c1,w) + P(c2,w)), where P(c1,w) and P(c2,w) are the probabilities assigned by the tagging model to the best candidate c1,w and the second-best candidate c2,w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adapt to Pos Errors Due to Automatic Pos Tagging", "sec_num": "4" }, { "text": "The experimental results in Table 6 show that delaying ambiguous pos resolution does not improve parsing performance, since pos ambiguities increase structural ambiguities and the parser is not robust enough to select the best tagging sequence. A higher confidence threshold means that more words with low-confidence tagging keep ambiguous pos tags, and the results show correspondingly worse performances. Charniak et al. [3] experimented with using multiple tags per word as input to a treebank parser, and came to a similar conclusion. ", "cite_spans": [ { "start": 415, "end": 418, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Adapt to Pos Errors Due to Automatic Pos Tagging", "sec_num": "4" }, { "text": "A new grammar blending method was proposed to cope with pos tagging errors. The idea is to blend the original grammar with a newly extracted grammar derived from a version of the Treebank in which pos categories are re-tagged by the automatic pos tagger. The blended grammar thus contains the original rules plus the extended rules induced by pos tagging errors. 
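One natural way to realize the blending is a weighted interpolation of the two grammars' rule probabilities; the paper specifies only the blending weights, so the exact combination below is an assumption and all names are illustrative.

```python
from collections import defaultdict

def blend_grammars(p_original, p_adapted, w=0.8):
    """Mix two PCFG rule-probability tables with weight w on the original grammar
    (w = 0.8 corresponds to the 8:2 setting).  Rules missing from one grammar
    contribute probability 0 on that side; the result is renormalized per LHS."""
    rules = set(p_original) | set(p_adapted)
    blended = {r: w * p_original.get(r, 0.0) + (1 - w) * p_adapted.get(r, 0.0)
               for r in rules}
    totals = defaultdict(float)
    for (lhs, _), p in blended.items():
        totals[lhs] += p
    return {(lhs, rhs): p / totals[lhs]
            for (lhs, rhs), p in blended.items() if totals[lhs] > 0}

# Toy example: the error-adapted grammar contains an extra rule induced by pos-tagging errors.
p_original = {("S", ("NP", "VP")): 1.0}
p_adapted  = {("S", ("NP", "VP")): 0.7, ("S", ("Nv", "VP")): 0.3}
print(blend_grammars(p_original, p_adapted, w=0.8))
# -> probabilities of roughly 0.94 and 0.06 for the two rules
```

The original rules keep most of their probability mass while the error-induced rules receive a small share, mirroring the 8:2 weighting reported as best in Table 7.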
A 5-fold cross-validation was applied to the testing data to tune the blending weight between the original grammar and the error-adapted grammar. The experimental results show that the blended grammar with weights 8:2 between the original grammar and the error-adapted grammar achieves the best results. It reduces about 20%~30% of the parsing errors caused by pos tagging errors, as shown in Table 7 . The pure error-adapted grammar, i.e. the 0:10 blending weight, does not improve the parsing performance very much.", "cite_spans": [], "ref_spans": [ { "start": 727, "end": 734, "text": "Table 7", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Blending Grammars", "sec_num": "4.1" }, { "text": "In order to obtain a high-precision and high-coverage grammar, we proposed a model to measure grammar coverage and designed a PCFG parser to measure the efficiency of the grammar. A grammar binarization method was proposed to generalize rules and to increase the coverage of context-free grammars. Linguistically-motivated feature constraints were added into grammar rules to maintain grammar rule precision. It is shown that the feature of the leftmost daughter, which constrains the linear order of constituents, is the most effective feature. Other subcategorization-related features, such as the mother-node and head features, do not contribute much to parsing F-scores. Such features might be very useful for sentence generation rather than parsing. The best-performing feature-constrained binarized grammar increases the grammar coverage of the original grammar from 93% to 99% and the bracketing F-score from 87% to 91% when parsing the moderately hard testing data. To cope with error propagation due to word segmentation and part-of-speech tagging errors, a grammar blending method was proposed to adapt to such errors. The blended grammar can reduce about 20~30% of the parsing errors caused by erroneous pos assignments of a pos tagging system. In the future, we will study more effective ways to resolve structural ambiguities, in particular considering the tradeoff effect between grammar coverage and precision. The balance between soft constraints and hard constraints will be a focus of our future research. In addition to rule probability, word association probability will be another preference measure for resolving structural ambiguity, in particular for conjunctive structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Research", "sec_num": "5" }, { "text": "The reason for using right-association binarization instead of left-association or head-first-association binarization is that our parsing process proceeds from left to right. 
It turns out that parsing speed of right associated grammars is much faster than left-associated grammars for leftto-right parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by National Science Council under a Center Excellence Grant NSC 93-2752-E-001-001-PAE and National Digital Archives Program Grant NSC93-2422-H-001-0004.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Context-sensitive statistics for improved grammatical language models", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "G", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 12th National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "742--747", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, and G. Carroll, \"Context-sensitive statistics for improved grammatical lan- guage models.\" In Proceedings of the 12th National Conference on Artificial Intelligence, AAAI Press, pp. 742-747, Seattle, WA, 1994,", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Treebank grammars", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1031--1036", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, \"Treebank grammars.\" In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pp. 1031-1036. AAAI Press/MIT Press, 1996.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Taggers for Parsers", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "G", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "J", "middle": [], "last": "Adcock", "suffix": "" }, { "first": "A", "middle": [], "last": "Cassanda", "suffix": "" }, { "first": "Y", "middle": [], "last": "Gotoh", "suffix": "" }, { "first": "J", "middle": [], "last": "Katz", "suffix": "" }, { "first": "M", "middle": [], "last": "Littman", "suffix": "" }, { "first": "J", "middle": [], "last": "Mccann", "suffix": "" } ], "year": 1996, "venue": "", "volume": "85", "issue": "", "pages": "1--2", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, and G. Carroll, J. Adcock, A. Cassanda, Y. Gotoh, J. Katz, M. Littman, J. Mccann, \"Taggers for Parsers\", Artificial Intelligence, vol. 85, num. 
1-2, 1996.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sinica Treebank", "authors": [ { "first": "Feng-Yi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Pi-Fang", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Chu-Ren", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "4", "issue": "", "pages": "87--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng-Yi Chen, Pi-Fang Tsai, Keh-Jiann Chen, and Huang, Chu-Ren, \"Sinica Treebank.\" Computational Linguistics and Chinese Language Processing, 4(2):87-103, 2000.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Chinese Treebanks and Grammar Extraction", "authors": [ { "first": "Yu-Ming", "middle": [], "last": "Keh-Jiann Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Hsieh", "suffix": "" } ], "year": 2004, "venue": "the First International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keh-Jiann Chen and, Yu-Ming Hsieh, \"Chinese Treebanks and Grammar Extraction.\" the First International Joint Conference on Natural Language Processing (IJCNLP-04), March 2004.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Head-Driven Statistical Models for Natural Language parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins, \"Head-Driven Statistical Models for Natural Language parsing.\" Ph.D. thesis, Univ. of Pennsylvania, 1999.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Grammar extraction, generalization and specialization", "authors": [ { "first": "Yu-Ming", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Duen-Chi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ROCLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu-Ming Hsieh, Duen-Chi Yang and Keh-Jiann Chen, \"Grammar extraction, generaliza- tion and specialization. ( in Chinese)\"Proceedings of ROCLING 2004.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. 
Manning and Hinrich Schutze, \"Foundations of Statistical Natural Lan- guage Processing.\" the MIT Press, Cambridge, Massachusetts, 1999.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "PCFG models of linguistic tree representations", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "", "pages": "613--632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, \"PCFG models of linguistic tree representations.\" Computational Linguis- tics, Vol.24, pp.613-632, 1998.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Accurate Unlexicalized Parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceeding of the 4lst Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning, \"Accurate Unlexicalized Parsing.\" Proceeding of the 4lst Annual Meeting of the Association for Computational Linguistics, pp. 423-430, July 2003.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Shallow Semantic Parsing of Chinese", "authors": [ { "first": "Honglin", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2004, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Honglin Sun and Daniel Jurafsky, \"Shallow Semantic Parsing of Chinese.\" Proceedings of NAACL 2004.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Statistical Chinese Parser ICTPROP", "authors": [ { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Bai", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhang, Qun Liu, Kevin Zhang, Gang Zou and Shuo Bai, \"Statistical Chinese Parser ICTPROP.\" Technology Report, Institute of Computing Technology, 2003.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The system diagram of CKIP parsing environment", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "-si jian qiu. \"He asked Lisi to pick up the ball.\" Tree-structure: S(agent:NP(Head:Nh:\u4ed6)|Head:VF:\u53eb|goal:NP(Head:Nb:\u674e\u674e)|theme:VP(Head:VC:\u64bf| goal:NP(Head:Na:\u7403))) A sample tree-structure", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Head (Head feature): Pos of phrasal head will propagate to all intermediate nodes within the constituent. Example:S(NP(Head:Nh:\u4ed6)|S' -VF (Head:VF:\u53eb|S' -VF (NP(Head:Nb:\u674e\u56db)| VP(Head:VC:\u64bf| NP(Head:Na:\u7403))))) Linguistic motivations: Constrain sub-categorization frame.Left (Leftmost feature): The pos of the leftmost constitute will propagate one-level to its intermediate mother-node only. Example:S(NP(Head:Nh:\u4ed6)|S' -Head:VF (Head:VF:\u53eb|S' -NP (NP(Head:Nb:\u674e\u56db)| VP(Head:VC:\u64bf| NP(Head:Na:\u7403))))) Linguistic motivation: Constraint linear order of constituents. 
Mother (Mother-node): The pos of the mother node is assigned to all daughter nodes. Example: S(NP -S (Head:Nh:\u4ed6)|S'(Head:VF:\u53eb|S'(NP -S (Head:Nb:\u674e\u56db)|VP -S (Head:VC:\u64bf|NP -VP (Head:Na:\u7403))))) Linguistic motivation: Constrain the syntactic structures of daughter nodes. Head0/1 (Existence of phrasal head): If the phrasal head exists in an intermediate node, the node is marked with feature 1; otherwise 0. Example: S(NP(Head:Nh:\u4ed6)|S' -1 (Head:VF:\u53eb|S' -0 (NP(Head:Nb:\u674e\u56db)|VP(Head:VC:\u64bf|NP(Head:Na:\u7403))))) Linguistic motivation: Enforce a unique phrasal head in each phrase.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "content": "
Testing data | Sources | Hardness | # of short sentences (1-5 words) | # of normal sentences (6-10 words) | # of long sentences (>11 words) | Total sentences
Sinica | Balanced corpus | moderate | 612 | 385 | 124 | 1,121
Sinorama | Magazine | harder | 428 | 424 | 104 | 956
Textbook | Elementary school | easy | 1,159 | 566 | 25 | 1,750
", "text": "Three sets of testing data were used in our experiments", "type_str": "table", "num": null, "html": null }, "TABREF1": { "content": "
(a) Binary rules without features / (b) Binary+Left
Metric | (a) Sinica | (a) Sinorama | (a) Textbook | (b) Sinica | (b) Sinorama | (b) Textbook
RC-Type | 95.632 | 94.026 | 94.479 | 95.074 | 93.823 | 94.464
RC-Token | 99.422 | 99.139 | 99.417 | 99.012 | 98.756 | 99.179
LP | 81.51 | 77.45 | 84.42 | 86.27 | 80.28 | 86.67
LR | 82.73 | 77.03 | 85.09 | 86.18 | 80.00 | 87.23
LF | 82.11 | 77.24 | 84.75 | 86.22 | 80.14 | 86.94
BP | 87.73 | 85.31 | 89.66 | 90.43 | 86.71 | 90.84
BR | 89.16 | 84.91 | 90.52 | 90.46 | 86.41 | 91.57
BF | 88.44 | 85.11 | 90.09 | 90.45 | 86.56 | 91.20
(c) Binary+Head / (d) Binary+Mother
Metric | (c) Sinica | (c) Sinorama | (c) Textbook | (d) Sinica | (d) Sinorama | (d) Textbook
RC-Type | 94.595 | 93.474 | 94.480 | 94.737 | 94.082 | 92.985
RC-Token | 98.919 | 98.740 | 99.215 | 98.919 | 98.628 | 98.857
LP | 83.68 | 77.96 | 85.52 | 81.87 | 78.00 | 83.77
LR | 83.75 | 77.83 | 86.10 | 82.83 | 76.95 | 84.58
LF | 83.71 | 77.90 | 85.81 | 82.35 | 77.47 | 84.17
BP | 89.49 | 85.29 | 90.17 | 87.85 | 85.44 | 88.47
BR | 89.59 | 85.15 | 90.91 | 88.84 | 84.66 | 89.57
BF | 89.54 | 85.22 | 90.54 | 88.34 | 85.05 | 89.01
", "text": "Performance evaluations for different features", "type_str": "table", "num": null, "html": null }, "TABREF2": { "content": "
Table 3: (a) Binary+Left+Head1/0 / (b) Binary+Left+Head
Metric | (a) Sinica | (a) Sinorama | (a) Textbook | (b) Sinica | (b) Sinorama | (b) Textbook
RC-Type | 94.887 | 93.745 | 94.381 | 92.879 | 91.853 | 92.324
RC-Token | 98.975 | 98.740 | 99.167 | 98.173 | 98.022 | 98.608
LF | 86.54 | 79.81 | 87.68 | 86.00 | 79.53 | 86.86
BF | 90.69 | 86.16 | 91.39 | 90.10 | 86.06 | 90.91
LF-1 | 86.71 | 79.98 | 87.73 | 86.76 | 79.86 | 87.16
BF-1 | 90.86 | 86.34 | 91.45 | 90.89 | 86.42 | 91.22
Table 4: Binary+Left+Head+Mother+Head1/0
Metric | Sinica | Sinorama | Textbook
RC-Type | 90.709 | 90.460 | 90.538
RC-Token | 96.906 | 96.698 | 97.643
LF | 86.75 | 78.38 | 86.19
BF | 90.54 | 85.20 | 90.07
LF-1 | 88.56 | 79.55 | 87.84
BF-1 | 92.44 | 86.46 | 91.80
", "text": "Performances of grammars with different feature combinations Performances of the grammar with most feature constraints", "type_str": "table", "num": null, "html": null }, "TABREF3": { "content": "
Grammar: Binary+Left+Head1/0
Metric | Sinica | Sinorama | Textbook
LF | 76.18 | 64.53 | 73.61
BF | 84.01 | 75.95 | 84.28
", "text": "Parsing performances of inputs produced by the automatic word segmentation and pos tagging", "type_str": "table", "num": null, "html": null }, "TABREF4": { "content": "
Confidence value = 0.5
Metric | Sinica | Sinorama | Textbook
LF | 75.92 | 64.14 | 74.66
BF | 83.48 | 75.22 | 83.65
Confidence value = 0.8
Metric | Sinica | Sinorama | Textbook
LF | 75.37 | 63.17 | 73.76
BF | 83.32 | 74.50 | 83.33
Confidence value = 1.0
Metric | Sinica | Sinorama | Textbook
LF | 74.12 | 61.25 | 69.44
BF | 82.57 | 73.17 | 81.17
", "text": "Parsing performances for different confidence level of pos ambiguities", "type_str": "table", "num": null, "html": null }, "TABREF5": { "content": "
(a) Error-adapted grammar only, i.e. blending weight 0:10 / (b) Blending weight 8:2
Metric | (a) Sinica | (a) Sinorama | (a) Textbook | (b) Sinica | (b) Sinorama | (b) Textbook
LF | 75.99 | 66.16 | 71.92 | 78.04 | 66.49 | 74.69
BF | 85.65 | 77.89 | 85.04 | 86.06 | 77.82 | 85.91
", "text": "Performances of the blended grammars", "type_str": "table", "num": null, "html": null } } } }