{
"paper_id": "I05-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:26:50.227235Z"
},
"title": "Linguistically-Motivated Grammar Extraction, Generalization and Adaptation",
"authors": [
{
"first": "Yu-Ming",
"middle": [],
"last": "Hsieh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei"
}
},
"email": ""
},
{
"first": "Duen-Chi",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei"
}
},
"email": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei"
}
},
"email": "kchen@iis.sinica.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In order to obtain a high precision and high coverage grammar, we proposed a model to measure grammar coverage and designed a PCFG parser to measure efficiency of the grammar. To generalize grammars, a grammar binarization method was proposed to increase the coverage of a probabilistic contextfree grammar. In the mean time linguistically-motivated feature constraints were added into grammar rules to maintain precision of the grammar. The generalized grammar increases grammar coverage from 93% to 99% and bracketing F-score from 87% to 91% in parsing Chinese sentences. To cope with error propagations due to word segmentation and part-of-speech tagging errors, we also proposed a grammar blending method to adapt to such errors. The blended grammar can reduce about 20~30% of parsing errors due to error assignment of pos made by a word segmentation system.",
"pdf_parse": {
"paper_id": "I05-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "In order to obtain a high precision and high coverage grammar, we proposed a model to measure grammar coverage and designed a PCFG parser to measure efficiency of the grammar. To generalize grammars, a grammar binarization method was proposed to increase the coverage of a probabilistic contextfree grammar. In the mean time linguistically-motivated feature constraints were added into grammar rules to maintain precision of the grammar. The generalized grammar increases grammar coverage from 93% to 99% and bracketing F-score from 87% to 91% in parsing Chinese sentences. To cope with error propagations due to word segmentation and part-of-speech tagging errors, we also proposed a grammar blending method to adapt to such errors. The blended grammar can reduce about 20~30% of parsing errors due to error assignment of pos made by a word segmentation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Treebanks provide instances of phrasal structures and their statistical distributions. However none of treebanks provide sufficient amount of samples which cover all types of phrasal structures, in particular, for the languages without inflectional markers, such as Chinese. It results that grammars directly extracted from treebanks suffer low coverage rate and low precision [7] . However arbitrarily generalizing applicable rule patterns may cause over-generation and increase ambiguities. It may not improve parsing performance [7] . Therefore a new approach of grammar binarization was proposed in this paper. The binarized grammars were derived from probabilistic context-free grammars (PCFG) by rule binarization. The approach was motivated by the linguistic fact that adjuncts could be arbitrarily occurred or not occurred in a phrase. The binarized grammars have better coverage than the original grammars directly extracted from treebank. However they also suffer problems of over-generation and structure-ambiguity. Contemporary grammar formalisms, such as GPSG, LFG, HPSG, take phrase structure rules as backbone for phrase structure representation and adding feature constraints to eliminate illegal or non-logical structures. In order to achieve higher coverage, the backbone grammar rules (syntactic grammar) are allowed to be over-generation and the feature constraints (semantic grammar for world knowledge) eliminate superfluous structures and increase the precision of grammar representation. Recently, probabilistic preferences for grammar rules were incorporated to resolve structure-ambiguities and had great improvements on parsing performances [2, 6, 10] . Regarding feature constrains, it was shown that contexture information of categories of neighboring nodes, mother nodes, or head words are useful for improving grammar precision and parsing performances [1, 2, 7, 10, 12] . However tradeoffs between grammar coverage and grammar precision are always inevitable. Excessive grammatical constraints will reduce grammar coverage and hence reduce parsing performances. On the other hand, loosely constrained grammars cause structure-ambiguities and also reduce parsing performances. In this paper, we consider grammar optimization in particular for Chinese language. Linguistically-motivated feature constraints were added to the grammar rules and evaluated to maintain both grammar coverage and precision. In section 2, the experimental environments were introduced. Grammar generalization and specialization methods were discussed in section 3. Grammars adapting to pos-tagging errors were discussed in section 4. Conclusions and future researches were stated in the last section.",
"cite_spans": [
{
"start": 377,
"end": 380,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 532,
"end": 535,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 1668,
"end": 1671,
"text": "[2,",
"ref_id": "BIBREF1"
},
{
"start": 1672,
"end": 1674,
"text": "6,",
"ref_id": "BIBREF5"
},
{
"start": 1675,
"end": 1678,
"text": "10]",
"ref_id": "BIBREF9"
},
{
"start": 1884,
"end": 1887,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 1888,
"end": 1890,
"text": "2,",
"ref_id": "BIBREF1"
},
{
"start": 1891,
"end": 1893,
"text": "7,",
"ref_id": "BIBREF6"
},
{
"start": 1894,
"end": 1897,
"text": "10,",
"ref_id": "BIBREF9"
},
{
"start": 1898,
"end": 1901,
"text": "12]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The complete research environment, as shown in the figure 1, comprises of the following five modules and functions. a) Word segmentation module: identify words including out-of-vocabulary word and provide their syntactic categories. b) Grammar construction module: extract and derive (perform rule generalization, specialization and adaptation processes) probabilistic grammars from treebanks. c) PCFG parser: parse input sentences. d) Evaluation module: evaluate performances of parsers and grammars. e) Semantic role assignment module: resolve semantic relations for constituents. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Environments",
"sec_num": "2"
},
{
"text": "Grammars are extracted from Sinica Treebank [4, 5] . Sinica Treebank version 2.0 contains 38,944 tree-structures and 230,979 words. It provides instances of phrasal structures and their statistical distributions. In Sinica Treebank, each sentence is annotated with its syntactic structure and semantic roles for constituents in a dependency framework. Since the Treebank cannot provide sufficient amount of samples which cover all types of phrasal structures, it results that grammars directly extracted from treebanks suffer low coverage rate [5] . Therefore grammar generalization and specialization processes are carried out to obtain grammars with better coverage and precision. The detail processes will be discussed in section 3.",
"cite_spans": [
{
"start": 44,
"end": 47,
"text": "[4,",
"ref_id": "BIBREF3"
},
{
"start": 48,
"end": 50,
"text": "5]",
"ref_id": "BIBREF4"
},
{
"start": 544,
"end": 547,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Extraction Module",
"sec_num": "2.1"
},
{
"text": "The probabilistic context-free parsing strategies were used as our parsing model [2, 6, 8] . Calculating probabilities of rules from a treebank is straightforward and we use maximum likelihood estimation to estimate the rule probabilities, as in [2] . The parser adopts an Earley's Algorithm [8] . It is a top-down left-to-right algorithm. The results of binary structures will be normalized into a regular phrase structures by removing intermediate nodes, if used grammars are binarized grammars. Grammar efficiency will be evaluated according to its parsing performance.",
"cite_spans": [
{
"start": 81,
"end": 84,
"text": "[2,",
"ref_id": "BIBREF1"
},
{
"start": 85,
"end": 87,
"text": "6,",
"ref_id": "BIBREF5"
},
{
"start": 88,
"end": 90,
"text": "8]",
"ref_id": "BIBREF7"
},
{
"start": 246,
"end": 249,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 292,
"end": 295,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PCFG Parser and Grammar Performance Evaluation",
"sec_num": "2.2"
},
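A minimal sketch of the maximum-likelihood rule-probability estimation mentioned above; the rule-tuple representation and the helper name are illustrative assumptions, not the authors' code.

```python
from collections import Counter, defaultdict

def estimate_rule_probs(rule_instances):
    """MLE for a PCFG: P(LHS -> RHS) = count(LHS -> RHS) / count(LHS).

    rule_instances is assumed to be an iterable of (lhs, rhs) pairs read off
    treebank trees, e.g. ("S", ("NP", "VP")).
    """
    rule_counts = Counter(rule_instances)
    lhs_counts = defaultdict(int)
    for (lhs, _rhs), count in rule_counts.items():
        lhs_counts[lhs] += count
    return {rule: count / lhs_counts[rule[0]]
            for rule, count in rule_counts.items()}

# Example: S -> NP VP observed twice and S -> VP once gives P(S -> NP VP) = 2/3.
probs = estimate_rule_probs([("S", ("NP", "VP")),
                             ("S", ("NP", "VP")),
                             ("S", ("VP",))])
```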
{
"text": "Three sets of testing data were used in our performance evaluation. Their basic statistics are shown in Table 1 . Each set of testing data represents easy, hard and moderate respectively. The token coverage of a set of rules is the ceiling of parsing algorithm to achieve. Tradeoff effects between grammar coverage and parsing F-score can be examined for each set of rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments and Performance Evaluation",
"sec_num": "2.3"
},
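A small sketch of how rule token coverage (the RC-Token figures reported in the tables below) could be computed; the function and variable names are assumptions for illustration.

```python
def rule_token_coverage(grammar_rules, test_rule_tokens):
    """Fraction of rule tokens observed in the test treebank that are covered
    by the extracted grammar; this is the ceiling a parser using that grammar
    can reach on the test set."""
    covered = sum(1 for rule in test_rule_tokens if rule in grammar_rules)
    return covered / len(test_rule_tokens) if test_rule_tokens else 0.0
```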
{
"text": "By using above mentioned research environment, we intend to find out most effective grammar generalization method and specialization features for Chinese language. To extend an existing or extracted grammar, there are several different approaches. A na\u00efve approach is to generalize a fine-grained rule to a coarse-grained rule. The approach does not generate new patterns. Only the applicable patterns for each word were increased. However it was shown that arbitrarily increasing the applicable rule patterns does increase the coverage rates of grammars, but degrade parsing performance [5] . A better approach is to generalizing and specializing rules under linguistically-motivated way.",
"cite_spans": [
{
"start": 588,
"end": 591,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Generalization and Specialization",
"sec_num": "3"
},
{
"text": "The length of a phrase in Treebank is variable and usually long phrases suffer from low probability. Therefore most PCFG approaches adopt the binary equivalence grammar, such as Chomsky normal form (CNF). For instance, a grammar rule of S NP Pp Adv V can be replaced by the set of equivalent rules of {S Np R0, R0 Pp R1, R1 Adv V}. The binarization method proposed in our system is different from CNF. It generalizes the original grammar to broader coverage. For instance, the above rule after performing right-association binarization 1 will produce following three binary rules {S Np S', S' Pp S', S' Adv V}. It results that constituents (adjuncts and arguments) can be occurred or not occurred at almost any place in the phrase. It partially fulfilled the linguistic fact that adjuncts in a phrase are arbitrarily occurred. However it also violated the fact that arguments do not arbitrarily occur. Experimental results of the Sinica testing data showed that the grammar token coverage increased from 92.8% to 99.4%, but the labeling F-score dropped from 82.43% to 82.11% [7] . Therefore feature constraints were added into binary rules to limit over-generation caused by recursively adding constituents into intermediate-phrase types, such as S' at above example.",
"cite_spans": [
{
"start": 1075,
"end": 1078,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Grammar Generation, Generalization, and Specialization",
"sec_num": "3.1"
},
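A minimal sketch of the right-association binarization described above, turning S → NP Pp Adv V into {S → NP S', S' → Pp S', S' → Adv V}; the representation of rules as (lhs, rhs) tuples is an assumption for illustration.

```python
def binarize_right(lhs, rhs):
    """Right-association binarization of one grammar rule.
    S -> NP Pp Adv V becomes S -> NP S', S' -> Pp S', S' -> Adv V,
    where S' is the intermediate node."""
    if len(rhs) <= 2:
        return [(lhs, tuple(rhs))]
    intermediate = lhs + "'"
    rules = [(lhs, (rhs[0], intermediate))]
    rest = list(rhs[1:])
    while len(rest) > 2:
        rules.append((intermediate, (rest[0], intermediate)))
        rest = rest[1:]
    rules.append((intermediate, tuple(rest)))
    return rules

# binarize_right("S", ["NP", "Pp", "Adv", "V"])
# -> [('S', ('NP', "S'")), ("S'", ('Pp', "S'")), ("S'", ('Adv', 'V'))]
```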
{
"text": "Feature attached rules will look like following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Grammar Generation, Generalization, and Specialization",
"sec_num": "3.1"
},
{
"text": "S' -left:Adv-head:V Adv V; S' -left:Pp-head:V Pp S' -left:Adv-head:V ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Grammar Generation, Generalization, and Specialization",
"sec_num": "3.1"
},
{
"text": "The intermediated node S' -left:Pp-head:V says that it is a partial S structure with leftmost constituent Pp and a phrasal head V. Here the leftmost feature constraints linear order of constituents and the head feature implies that the structure patterns are head word dependent. Both constraints are linguistically plausible. Another advantage of the feature-constraint binary grammar is that in addition to rule probability it is easy to implement association strength of modifier word and head word to evaluate plausibility of derived structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Grammar Generation, Generalization, and Specialization",
"sec_num": "3.1"
},
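A sketch of how the Left and Head features could be attached to the intermediate nodes during binarization, reproducing the S'-left:...-head:V labels shown above; the function signature and the assumption that the head pos of each rule is known in advance are illustrative.

```python
def binarize_with_features(lhs, rhs, head_pos):
    """Right-association binarization with feature-decorated intermediate nodes:
    each intermediate node records its leftmost daughter (Left feature) and the
    pos of the phrasal head (Head feature)."""
    rules = []
    parent = lhs
    rest = list(rhs)
    while len(rest) > 2:
        intermediate = "{0}'-left:{1}-head:{2}".format(lhs, rest[1], head_pos)
        rules.append((parent, (rest[0], intermediate)))
        parent = intermediate
        rest = rest[1:]
    rules.append((parent, tuple(rest)))
    return rules

# binarize_with_features("S", ["NP", "Pp", "Adv", "V"], "V") yields
# S -> NP S'-left:Pp-head:V, S'-left:Pp-head:V -> Pp S'-left:Adv-head:V,
# S'-left:Adv-head:V -> Adv V, matching the feature-attached rules above.
```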
{
"text": "Adding feature constraints into grammar rules attempts to increase precision of grammar representation. However the side-effect is that it also reduces grammar coverage. Therefore grammar design is balanced between its precision and coverage. We are looking for a grammar with highest coverage and precision. The tradeoff depends on the ambiguity resolution power of adopted parser. If the ambiguity resolution power of adopted parser is strong and robust, the grammar coverage might be more important than grammar precision. On the other hand a weak parser had better to use grammars with more feature constraints. In our experiments, we consider grammars suited for PCFG parsing. The follows are some of the most important linguisticallymotivated features which have been tested. Each set of feature constraint added grammar is tested and evaluated. Table 2 shows the experimental results. Since all features have their own linguistic motivations, the result feature constrained grammars maintain high coverage and have improving grammar precision. Therefore each feature more or less improves the parsing performance and the feature of leftmost daughter node, which constrains the linear order of constituents, is the most effective feature. The Left-constraint-added grammar reduces grammar token-coverage very little and significantly increases label and bracket f-scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 852,
"end": 859,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Feature Constraints for Reducing Ambiguities of Generalized Grammars",
"sec_num": "3.2"
},
{
"text": "It is shown that all linguistically-motivated features are more or less effective. The leftmost constitute feature, which constraints linear order of constituents, is the most effective feature. The mother-node feature is the least effective feature, since syntactic structures do not vary too much for each phrase type while playing different grammatical functions in Chinese. Since all the above features are effective, we like to see the results of multi-feature combinations. Many different feature combinations were tested. The experimental results show that none of the feature combinations outperform the binary grammars with Left and Head1/0 features, even the grammar combining all features, as shown in the Table 3 and 4. Here LF-1 and BF-1 measure the label and bracket f-scores only on the sentences with parsing results (i.e. sentences failed of producing parsing results are ignored). The results show that grammar with all feature constraints has better LF-1 and BF-1 scores, since the grammar has higher precision. However the total performances, i.e. Lf and BF scores, are not better than the simpler grammar with feature constraints of Left and Head1/0, since the higher precision grammar losses slight edge on the grammar coverage. The result clearly shows that tradeoffs do exist between grammar precision and coverage. It also suggests that if a feature constraint can improve grammar precision a lot but also reduce grammar coverage a lot, it is better to treat such feature constraints as a soft constraint instead of hard constraint. Probabilistic preference for such feature parameters will be a possible implementation of soft constraint.",
"cite_spans": [],
"ref_spans": [
{
"start": 717,
"end": 724,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Constraints for Reducing Ambiguities of Generalized Grammars",
"sec_num": "3.2"
},
{
"text": "Feature constraints impose additional constraints between constituents for phrase structures. However different feature constraints serve for different functions and have different feature assignment principles. Some features serve for local constraints, such as Left, Head, and Head0/1. Those features are only assigned at local intermediate nodes. Some features are designed for external effect such as Mother Feature, which is assigned to phrase nodes and their daughter intermediate nodes. For instances, NP structures for subject usually are different from NP structures for object in English sentences [10] . NP attached with Mother-feature can make the difference. NP S rules and NP VP rules will be derived each respectively from subject NP and object NP structures. However such difference seems not very significant in Chinese. Therefore feature selection and assignment should be linguistically-motivated as shown in our experiments.",
"cite_spans": [
{
"start": 608,
"end": 612,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "3.3"
},
{
"text": "In conclusion, linguistically-motivated features have better effects on parsing performances than arbitrarily selected features, since they increase grammar precision, but only reduce grammar coverage slightly. The feature of leftmost daughter, which constraints linear order of constituents, is the most effective feature for parsing. Other sub-categorization related features, such as mother node and head features, do not contribute parsing F-scores very much. Such features might be useful for purpose of sentence generation instead of parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "3.3"
},
{
"text": "Perfect testing data was used for the above experiments without considering word segmentation and pos tagging errors. However in real life word segmentation and pos tagging errors will degenerate parsing performances. The real parsing performances of accepting input from automatic word segmentation and pos tagging system are shown in the Table 5 . The na\u00efve approach to overcome the pos tagging errors was to delay some of the ambiguous pos resolution for words with lower confidence tagging scores and leave parser to resolve the ambiguous pos until parsing stage. The tagging confidence of each word is measured by the following value. ",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Adapt to Pos Errors Due to Automatic Pos Tagging",
"sec_num": "4"
},
{
"text": ", where P(c 1,w ) and P(c 2,w ) are probabilities assigned by the tagging model for the best candidate c 1,w and the second best candidate c 2,w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "+",
"sec_num": null
},
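A small sketch of the tagging-confidence measure as reconstructed above; the normalization of the two best candidate probabilities is inferred from the surrounding text, so the exact formula should be treated as an assumption.

```python
def tagging_confidence(p_best, p_second_best):
    """Confidence of the best pos candidate: P(c1,w) / (P(c1,w) + P(c2,w)).
    Words whose confidence falls below the chosen threshold (0.5, 0.8 or 1.0
    in Table 6) keep their ambiguous pos tags for the parser to resolve."""
    return p_best / (p_best + p_second_best)

# tagging_confidence(0.6, 0.3) == 0.666...; with a 0.8 threshold this word
# would keep both pos candidates for the parsing stage.
```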
{
"text": "The experimental results, Table 6 , show that delaying ambiguous pos resolution does not improve parsing performances, since pos ambiguities increase structure ambiguities and the parser is not robust enough to select the best tagging sequence. The higher confidence values mean that more words with lower confidence tagging will leave ambiguous pos tags and the results show the worse performances. Charniak et al [3] experimented with using multiple tags per word as input to a treebank parser, and came to a similar conclusion. ",
"cite_spans": [
{
"start": 415,
"end": 418,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 6",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "+",
"sec_num": null
},
{
"text": "A new approach of grammar blending method was proposed to cope with pos tagging errors. The idea is to blend the original grammar with a newly extracted grammar derived from the Treebank in which pos categories are tagged by the automatic pos tagger. The blended grammars contain the original rules and the extended rules due to pos tagging errors. A 5-fold cross-validation was applied on the testing data to tune the blending weight between the original grammar and the error-adapted grammar. The experimental results show that the blended grammar of weights 8:2 between the original grammar and error-adapted grammar achieves the best results. It reduces about 20%~30% parsing errors due to pos tagging errors, shown in the Table 7 . The pure error-adapted grammar, i.e. 0:10 blending weight, does not improve the parsing performance very much ",
"cite_spans": [],
"ref_spans": [
{
"start": 727,
"end": 734,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Blending Grammars",
"sec_num": "4.1"
},
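The paper does not spell out how the 8:2 blending weight is applied; one plausible reading is a linear interpolation of rule probabilities, sketched below (the function name and the handling of rules missing from one grammar are assumptions).

```python
def blend_grammars(original, error_adapted, w_original=0.8, w_adapted=0.2):
    """Blend two PCFGs by linearly interpolating their rule probabilities.
    original and error_adapted map rules to probabilities; a rule absent
    from one grammar contributes probability 0.0 from that grammar."""
    all_rules = set(original) | set(error_adapted)
    return {rule: w_original * original.get(rule, 0.0)
                  + w_adapted * error_adapted.get(rule, 0.0)
            for rule in all_rules}
```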
{
"text": "In order to obtain a high precision and high coverage grammar, we proposed a model to measure grammar coverage and designed a PCFG parser to measure efficiency of the grammar. Grammar binarization method was proposed to generalize rules and to increase the coverage of context-free grammars. Linguistically-motivated feature constraints were added into grammar rules to maintain grammar rule precision. It is shown that the feature of leftmost daughter, which constraints linear order of constituents, is the most effective feature. Other sub-categorization related features, such as mother node and head features, do not contribute parsing F-scores very much. Such features might be very useful for purpose of sentence generation instead of parsing. The best performed feature constraint binarized grammar increases the grammar coverage of the original grammar from 93% to 99% and bracketing F-score from 87% to 91% in parsing moderate hard testing data. To cope with error propagations due to word segmentation and part-of-speech tagging errors, a grammar blending method was proposed to adapt to such errors. The blended grammar can reduce about 20~30% of parsing errors due to error assignment of a pos tagging system. In the future, we will study more effective way to resolve structure ambiguities. In particular, consider the tradeoff effect between grammar coverage and precision. The balance between soft constraints and hard constraints will be focus of our future researches. In addition to rule probability, word association probability will be another preference measure to resolve structure ambiguity, in particular for conjunctive structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Researches",
"sec_num": "5"
},
{
"text": "The reason for using right-association binarization instead of left-association or head-first association binarization is that our parsing process is from left to right. It turns out that parsing speed of right associated grammars is much faster than left-associated grammars for leftto-right parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by National Science Council under a Center Excellence Grant NSC 93-2752-E-001-001-PAE and National Digital Archives Program Grant NSC93-2422-H-001-0004.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Context-sensitive statistics for improved grammatical language models",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 12th National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "742--747",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak, and G. Carroll, \"Context-sensitive statistics for improved grammatical lan- guage models.\" In Proceedings of the 12th National Conference on Artificial Intelligence, AAAI Press, pp. 742-747, Seattle, WA, 1994,",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Treebank grammars",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1031--1036",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak, \"Treebank grammars.\" In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pp. 1031-1036. AAAI Press/MIT Press, 1996.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Taggers for Parsers",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Adcock",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Cassanda",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Gotoh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Littman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mccann",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "85",
"issue": "",
"pages": "1--2",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak, and G. Carroll, J. Adcock, A. Cassanda, Y. Gotoh, J. Katz, M. Littman, J. Mccann, \"Taggers for Parsers\", Artificial Intelligence, vol. 85, num. 1-2, 1996.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sinica Treebank",
"authors": [
{
"first": "Feng-Yi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pi-Fang",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "4",
"issue": "",
"pages": "87--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng-Yi Chen, Pi-Fang Tsai, Keh-Jiann Chen, and Huang, Chu-Ren, \"Sinica Treebank.\" Computational Linguistics and Chinese Language Processing, 4(2):87-103, 2000.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Chinese Treebanks and Grammar Extraction",
"authors": [
{
"first": "Yu-Ming",
"middle": [],
"last": "Keh-Jiann Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hsieh",
"suffix": ""
}
],
"year": 2004,
"venue": "the First International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keh-Jiann Chen and, Yu-Ming Hsieh, \"Chinese Treebanks and Grammar Extraction.\" the First International Joint Conference on Natural Language Processing (IJCNLP-04), March 2004.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Head-Driven Statistical Models for Natural Language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, \"Head-Driven Statistical Models for Natural Language parsing.\" Ph.D. thesis, Univ. of Pennsylvania, 1999.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Grammar extraction, generalization and specialization",
"authors": [
{
"first": "Yu-Ming",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Duen-Chi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ROCLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu-Ming Hsieh, Duen-Chi Yang and Keh-Jiann Chen, \"Grammar extraction, generaliza- tion and specialization. ( in Chinese)\"Proceedings of ROCLING 2004.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning and Hinrich Schutze, \"Foundations of Statistical Natural Lan- guage Processing.\" the MIT Press, Cambridge, Massachusetts, 1999.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "PCFG models of linguistic tree representations",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "",
"pages": "613--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, \"PCFG models of linguistic tree representations.\" Computational Linguis- tics, Vol.24, pp.613-632, 1998.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Accurate Unlexicalized Parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceeding of the 4lst Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning, \"Accurate Unlexicalized Parsing.\" Proceeding of the 4lst Annual Meeting of the Association for Computational Linguistics, pp. 423-430, July 2003.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Shallow Semantic Parsing of Chinese",
"authors": [
{
"first": "Honglin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Honglin Sun and Daniel Jurafsky, \"Shallow Semantic Parsing of Chinese.\" Proceedings of NAACL 2004.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical Chinese Parser ICTPROP",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Bai",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang, Qun Liu, Kevin Zhang, Gang Zou and Shuo Bai, \"Statistical Chinese Parser ICTPROP.\" Technology Report, Institute of Computing Technology, 2003.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The system diagram of CKIP parsing environment",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "-si jian qiu. \"He asked Lisi to pick up the ball.\" Tree-structure: S(agent:NP(Head:Nh:\u4ed6)|Head:VF:\u53eb|goal:NP(Head:Nb:\u674e\u674e)|theme:VP(Head:VC:\u64bf| goal:NP(Head:Na:\u7403))) A sample tree-structure",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Head (Head feature): Pos of phrasal head will propagate to all intermediate nodes within the constituent. Example:S(NP(Head:Nh:\u4ed6)|S' -VF (Head:VF:\u53eb|S' -VF (NP(Head:Nb:\u674e\u56db)| VP(Head:VC:\u64bf| NP(Head:Na:\u7403))))) Linguistic motivations: Constrain sub-categorization frame.Left (Leftmost feature): The pos of the leftmost constitute will propagate one-level to its intermediate mother-node only. Example:S(NP(Head:Nh:\u4ed6)|S' -Head:VF (Head:VF:\u53eb|S' -NP (NP(Head:Nb:\u674e\u56db)| VP(Head:VC:\u64bf| NP(Head:Na:\u7403))))) Linguistic motivation: Constraint linear order of constituents. Mother (Mother-node): The pos of mother-node assigns to all daughter nodes. Example:S(NP -S (Head:Nh:\u4ed6)|S'(Head:VF:\u53eb|S'(NP -S (Head:Nb:\u674e\u56db)|VP -S (Head:VC: \u64bf | NP -VP (Head:Na: \u7403 ))))) Linguistic motivation: Constraint syntactic structures for daughter nodes. Head0/1 (Existence of phrasal head): If phrasal head exists in intermediate node, the nodes will be marked with feature 1; otherwise 0. Example:S(NP(Head:Nh: \u4ed6 )|S' -1 (Head:VF: \u53eb |S' -0 (NP(Head:Nb: motivation: Enforce unique phrasal head in each phrase.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>Testing data</td><td>Sources</td><td>hardness</td><td># of short sentence (1-5 words)</td><td># of normal sentences (6-10 words)</td><td># of long sentences (&gt;11 words)</td><td>Total sentences</td></tr><tr><td>Sinica</td><td colspan=\"3\">Balanced corpus moderate 612</td><td>385</td><td>124</td><td>1,121</td></tr><tr><td>Sinorama</td><td>Magazine</td><td>harder</td><td>428</td><td>424</td><td>104</td><td>956</td></tr><tr><td>Textbook</td><td colspan=\"2\">Elementary school easy</td><td>1,159</td><td>566</td><td>25</td><td>1,750</td></tr></table>",
"text": "Three sets of testing data were used in our experiments",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"3\">(a)Binary rules without features</td><td colspan=\"2\">(b)Binary+Left</td><td/></tr><tr><td/><td>Sinica</td><td>Snorama</td><td>Textbook</td><td>Sinica</td><td colspan=\"2\">Sinorama Textbook</td></tr><tr><td colspan=\"3\">RC-Type 95.632 94.026</td><td>94.479</td><td>95.074</td><td>93.823</td><td>94.464</td></tr><tr><td colspan=\"3\">RC-Token 99.422 99.139</td><td>99.417</td><td>99.012</td><td>98.756</td><td>99.179</td></tr><tr><td>LP</td><td>81.51</td><td>77.45</td><td>84.42</td><td>86.27</td><td>80.28</td><td>86.67</td></tr><tr><td>LR</td><td>82.73</td><td>77.03</td><td>85.09</td><td>86.18</td><td>80.00</td><td>87.23</td></tr><tr><td>LF</td><td>82.11</td><td>77.24</td><td>84.75</td><td>86.22</td><td>80.14</td><td>86.94</td></tr><tr><td>BP</td><td>87.73</td><td>85.31</td><td>89.66</td><td>90.43</td><td>86.71</td><td>90.84</td></tr><tr><td>BR</td><td>89.16</td><td>84.91</td><td>90.52</td><td>90.46</td><td>86.41</td><td>91.57</td></tr><tr><td>BF</td><td>88.44</td><td>85.11</td><td>90.09</td><td>90.45</td><td>86.56</td><td>91.20</td></tr><tr><td/><td colspan=\"2\">(c)Binary+Head</td><td/><td colspan=\"2\">(d)Binary+Mother</td><td/></tr><tr><td/><td>Sinica</td><td>Snorama</td><td>Textbook</td><td>Sinica</td><td colspan=\"2\">Sinorama Textbook</td></tr><tr><td colspan=\"3\">RC-Type 94.595 93.474</td><td>94.480</td><td>94.737</td><td>94.082</td><td>92.985</td></tr><tr><td colspan=\"3\">RC-Token 98.919 98.740</td><td>99.215</td><td>98.919</td><td>98.628</td><td>98.857</td></tr><tr><td>LP</td><td>83.68</td><td>77.96</td><td>85.52</td><td>81.87</td><td>78.00</td><td>83.77</td></tr><tr><td>LR</td><td>83.75</td><td>77.83</td><td>86.10</td><td>82.83</td><td>76.95</td><td>84.58</td></tr><tr><td>LF</td><td>83.71</td><td>77.90</td><td>85.81</td><td>82.35</td><td>77.47</td><td>84.17</td></tr><tr><td>BP</td><td>89.49</td><td>85.29</td><td>90.17</td><td>87.85</td><td>85.44</td><td>88.47</td></tr><tr><td>BR</td><td>89.59</td><td>85.15</td><td>90.91</td><td>88.84</td><td>84.66</td><td>89.57</td></tr><tr><td>BF</td><td>89.54</td><td>85.22</td><td>90.54</td><td>88.34</td><td>85.05</td><td>89.01</td></tr></table>",
"text": "Performance evaluations for different features",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table><tr><td/><td colspan=\"3\">(a) Binary+Left+Head1/0</td><td colspan=\"2\">(b) Binary+Left+Head</td><td/></tr><tr><td/><td>Sinica</td><td>Sinorama</td><td>Textbook</td><td>Sinica</td><td>Sinorama</td><td>Textbook</td></tr><tr><td colspan=\"3\">RC-Type 94.887 93.745</td><td>94.381</td><td>92.879</td><td>91.853</td><td>92.324</td></tr><tr><td colspan=\"3\">RC-Token 98.975 98.740</td><td>99.167</td><td>98.173</td><td>98.022</td><td>98.608</td></tr><tr><td>LF</td><td>86.54</td><td>79.81</td><td>87.68</td><td>86.00</td><td>79.53</td><td>86.86</td></tr><tr><td>BF</td><td>90.69</td><td>86.16</td><td>91.39</td><td>90.10</td><td>86.06</td><td>90.91</td></tr><tr><td>LF-1</td><td>86.71</td><td>79.98</td><td>87.73</td><td>86.76</td><td>79.86</td><td>87.16</td></tr><tr><td>BF-1</td><td>90.86</td><td>86.34</td><td>91.45</td><td>90.89</td><td>86.42</td><td>91.22</td></tr><tr><td/><td/><td colspan=\"4\">Binary+Left+Head+Mother+Head1/0</td><td/></tr><tr><td/><td/><td colspan=\"2\">Sinica</td><td>Sinorama</td><td>Textbook</td><td/></tr><tr><td/><td>RC-Type</td><td colspan=\"2\">90.709</td><td>90.460</td><td>90.538</td><td/></tr><tr><td/><td>RC-Token</td><td colspan=\"2\">96.906</td><td>96.698</td><td>97.643</td><td/></tr><tr><td/><td>LF</td><td colspan=\"2\">86.75</td><td>78.38</td><td>86.19</td><td/></tr><tr><td/><td>BF</td><td colspan=\"2\">90.54</td><td>85.20</td><td>90.07</td><td/></tr><tr><td/><td>LF-1</td><td colspan=\"2\">88.56</td><td>79.55</td><td>87.84</td><td/></tr><tr><td/><td>BF-1</td><td colspan=\"2\">92.44</td><td>86.46</td><td>91.80</td><td/></tr></table>",
"text": "Performances of grammars with different feature combinations Performances of the grammar with most feature constraints",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table><tr><td>pos tagging</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Binary+Left+Head1/0</td><td/></tr><tr><td/><td>Sinica</td><td>Sinorama</td><td>Textbook</td></tr><tr><td>LF</td><td>76.18</td><td>64.53</td><td>73.61</td></tr><tr><td>BF</td><td>84.01</td><td>75.95</td><td>84.28</td></tr></table>",
"text": "Parsing performances of inputs produced by the automatic word segmentation and",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table><tr><td/><td colspan=\"2\">Confidence value=0.5</td><td/></tr><tr><td/><td colspan=\"3\">Sinica Sinorama Textbook</td></tr><tr><td>LF</td><td>75.92</td><td>64.14</td><td>74.66</td></tr><tr><td>BF</td><td>83.48</td><td>75.22</td><td>83.65</td></tr><tr><td/><td colspan=\"2\">Confidence value=0.8</td><td/></tr><tr><td/><td colspan=\"3\">Sinica Sinorama Textbook</td></tr><tr><td>LF</td><td>75.37</td><td>63.17</td><td>73.76</td></tr><tr><td>BF</td><td>83.32</td><td>74.50</td><td>83.33</td></tr><tr><td/><td colspan=\"2\">Confidence value=1.0</td><td/></tr><tr><td/><td colspan=\"3\">Sinica Sinorama Textbook</td></tr><tr><td>LF</td><td>74.12</td><td>61.25</td><td>69.44</td></tr><tr><td>BF</td><td>82.57</td><td>73.17</td><td>81.17</td></tr></table>",
"text": "Parsing performances for different confidence level of pos ambiguities",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"content": "<table><tr><td/><td colspan=\"2\">Error-adapted</td><td>grammar</td><td>i.e.</td><td colspan=\"2\">Blending weight 8:2</td></tr><tr><td/><td colspan=\"3\">blending weight (0:10)</td><td/><td/><td/></tr><tr><td/><td>Sinica</td><td colspan=\"4\">Sinirama Textbook Sinica</td><td colspan=\"2\">Sinirama Textbook</td></tr><tr><td>LF</td><td>75.99</td><td>66.16</td><td>71.92</td><td/><td>78.04</td><td>66.49</td><td>74.69</td></tr><tr><td>BF</td><td>85.65</td><td>77.89</td><td>85.04</td><td/><td>86.06</td><td>77.82</td><td>85.91</td></tr></table>",
"text": "Performances of the blended grammars",
"type_str": "table",
"num": null,
"html": null
}
}
}
}