{ "paper_id": "I05-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:28.282572Z" }, "title": "Automatic Partial Parsing Rule Acquisition Using Decision Tree Induction", "authors": [ { "first": "Myung-Seok", "middle": [], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Korea Advanced Institute of Science and Technology", "location": { "addrLine": "373-1 Guseong-dong, Yuseong-gu", "postCode": "305-701", "settlement": "Daejeon", "country": "Republic of Korea" } }, "email": "mschoi@kaist.ac.kr" }, { "first": "Su", "middle": [], "last": "Lim", "suffix": "", "affiliation": { "laboratory": "", "institution": "Korea Advanced Institute of Science and Technology", "location": { "addrLine": "373-1 Guseong-dong, Yuseong-gu", "postCode": "305-701", "settlement": "Daejeon", "country": "Republic of Korea" } }, "email": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Korea Advanced Institute of Science and Technology", "location": { "addrLine": "373-1 Guseong-dong, Yuseong-gu", "postCode": "305-701", "settlement": "Daejeon", "country": "Republic of Korea" } }, "email": "kschoi@cs.kaist.ac.kr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Partial parsing techniques try to recover syntactic information efficiently and reliably by sacrificing completeness and depth of analysis. One of the difficulties of partial parsing is finding a means to extract the grammar involved automatically. In this paper, we present a method for automatically extracting partial parsing rules from a tree-annotated corpus using decision tree induction. We define the partial parsing rules as those that can decide the structure of a substring in an input sentence deterministically. This decision can be considered as a classification; as such, for a substring in an input sentence, a proper structure is chosen among the structures occurred in the corpus. For the classification, we use decision tree induction, and induce partial parsing rules from the decision tree. The acquired grammar is similar to a phrase structure grammar, with contextual and lexical information, but it allows building structures of depth one or more. Our experiments showed that the proposed partial parser using the automatically extracted rules is not only accurate and efficient, but also achieves reasonable coverage for Korean.", "pdf_parse": { "paper_id": "I05-1013", "_pdf_hash": "", "abstract": [ { "text": "Partial parsing techniques try to recover syntactic information efficiently and reliably by sacrificing completeness and depth of analysis. One of the difficulties of partial parsing is finding a means to extract the grammar involved automatically. In this paper, we present a method for automatically extracting partial parsing rules from a tree-annotated corpus using decision tree induction. We define the partial parsing rules as those that can decide the structure of a substring in an input sentence deterministically. This decision can be considered as a classification; as such, for a substring in an input sentence, a proper structure is chosen among the structures occurred in the corpus. For the classification, we use decision tree induction, and induce partial parsing rules from the decision tree. The acquired grammar is similar to a phrase structure grammar, with contextual and lexical information, but it allows building structures of depth one or more. 
Our experiments showed that the proposed partial parser using the automatically extracted rules is accurate and efficient, and achieves reasonable coverage for Korean.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Conventional parsers try to identify syntactic information completely. These parsers encounter difficulties when processing unrestricted texts, because of ungrammatical sentences, the unavoidable incompleteness of the lexicon and grammar, and other problems such as very long sentences. Partial parsing is an alternative technique developed in response to these problems. This technique aims to recover syntactic information efficiently and reliably from unrestricted texts by sacrificing completeness and depth of analysis and relying on local information to resolve ambiguities [1] .", "cite_spans": [ { "start": 568, "end": 571, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Partial parsing techniques can be roughly classified into two groups. The first group of techniques performs partial parsing via finite-state machines [2, 3, 9, 10] . These approaches apply regular-expression recognizers sequentially to an input sentence. When multiple rules match an input string at a given position, the longest-matching rule is selected. Therefore, these parsers always produce a single best analysis and operate very fast. In general, these approaches use a hand-written regular grammar. As would be expected, manually writing a grammar is both very time consuming and prone to inconsistencies.", "cite_spans": [ { "start": 151, "end": 154, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 155, "end": 157, "text": "3,", "ref_id": "BIBREF2" }, { "start": 158, "end": 160, "text": "9,", "ref_id": "BIBREF8" }, { "start": 161, "end": 164, "text": "10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The other group of partial parsing techniques is text chunking, that is, the recognition of non-overlapping and non-recursive cores of major phrases (chunks) by machine learning techniques [4, 7, 8, 13, 15, 17] . Since Ramshaw and Marcus [15] first proposed formulating the chunking task as a tagging task, most chunking methods have followed this word-tagging approach. 
In base noun phrase chunking, for instance, each word is marked with one of three chunk tags: I (for a word inside an NP), O (for a word outside of an NP), and B (for a word between the end of one NP and the start of another), as follows 1 :", "cite_spans": [ { "start": 192, "end": 195, "text": "[4,", "ref_id": "BIBREF3" }, { "start": 196, "end": 198, "text": "7,", "ref_id": "BIBREF6" }, { "start": 199, "end": 201, "text": "8,", "ref_id": "BIBREF7" }, { "start": 202, "end": 205, "text": "13,", "ref_id": "BIBREF12" }, { "start": 206, "end": 209, "text": "15,", "ref_id": "BIBREF14" }, { "start": 210, "end": 213, "text": "17]", "ref_id": "BIBREF16" }, { "start": 241, "end": 245, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 597, "end": 598, "text": "1", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In ( early trading ) in ( Hong Kong ) ( Monday ) , ( gold ) was quoted at ( $ 366.50 ) ( an ounce ) . With respect to these approaches, there have been several studies on automatically extracting chunking rules from large-scale corpora using transformation-based learning [15] , error-driven pruning [7] , and the ALLiS top-down inductive system [8] . However, it is not yet clear how these approaches could be extended beyond the chunking task.", "cite_spans": [ { "start": 266, "end": 270, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 294, "end": 297, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 336, "end": 339, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a method of automatically extracting partial parsing rules from a tree-annotated corpus using the decision tree method. Our goal is to extract rules with higher accuracy and broader coverage. We define the partial parsing rules as those that can establish the structure of a substring in an input sentence deterministically. This decision can be considered as a classification; as such, for a substring in an input sentence, a proper structure is chosen among the structures occurring in the corpus, extending the word-tagging approach of text chunking. For the classification, we use decision tree induction with features of contextual and lexical information. In addition, we use negative evidence, as well as positive evidence, to gain higher accuracy. To cover general recursive phrases, all possible substrings in a parse tree are taken into account by extracting evidence recursively from each parse tree in the training corpus. We induce partial parsing rules from the decision tree and, to retain only those rules that are accurate, verify each rule through cross-validation. In many cases, several different structures are assigned to the same substring in a tree-annotated corpus. Substrings for coordination and compound nouns are typical examples of such ambiguous cases in Korean. These ambiguities can prevent us from extracting partial parsing rules that cover substrings with more than one substructure and, consequently, can cause the result of partial parsing to be limited to a relatively shallow depth. 
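For reference, the chunk-tagging scheme illustrated earlier can be computed mechanically. The following is a minimal sketch (the helper name and the whitespace tokenization are ours, not part of the cited chunkers); it converts the bracketed example into I/O/B word tags:

```python
# Minimal sketch: map '('/')'-bracketed NP chunks to I/O/B word tags.
# B is assigned only to a chunk-initial word that immediately follows
# another chunk, exactly as in the example above.
def chunks_to_iob(tokens):
    tags = []
    inside = start = prev_chunk = False
    for tok in tokens:
        if tok == "(":
            inside = start = True
        elif tok == ")":
            inside, prev_chunk = False, True
        elif inside:
            tags.append((tok, "B" if (start and prev_chunk) else "I"))
            start = prev_chunk = False
        else:
            tags.append((tok, "O"))
            prev_chunk = False
    return tags

sent = "In ( early trading ) in ( Hong Kong ) ( Monday ) , ( gold ) was quoted at ( $ 366.50 ) ( an ounce ) ."
print(chunks_to_iob(sent.split()))
# [('In', 'O'), ('early', 'I'), ('trading', 'I'), ('in', 'O'), ('Hong', 'I'),
#  ('Kong', 'I'), ('Monday', 'B'), (',', 'O'), ..., ('an', 'B'), ('ounce', 'I'), ('.', 'O')]
```
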
In this work, we address this ambiguity problem by merging the ambiguous substructures using an underspecified representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This underspecification leads to broader coverage without deteriorating either the determinism or the precision of partial parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The acquired grammar is similar to a phrase structure grammar, with contextual and lexical information, but it allows building structures of depth one or more. It is easy to understand, it can be easily modified, and rules can be selectively added to or deleted from it. Partial parsing with this grammar processes an input sentence deterministically using longest-match heuristics. The acquired rules are then recursively applied to construct higher structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To start, we define the rule template, the basic format of a partial parsing rule, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Rule Acquisition", "sec_num": "2" }, { "text": "left context | substring | right context \u2212\u2192 substructure", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Rule Acquisition", "sec_num": "2" }, { "text": "This template shows how the substring of an input sentence, surrounded by the left context and the right context, constructs the substructure. The left context and the right context are the remainder of the input sentence minus the substring. For automatic learning of the partial parsing rules, the lengths of the left context and the right context are each restricted to one. Note that applying a partial parsing rule results in a structure of depth one or more. In other words, the rules extracted by this rule template reduce a substring to a subtree, as opposed to a single non-terminal; hence, the resultant rules can be applied more specifically and strictly. Figure 1 illustrates the procedure for the extraction of partial parsing rules. First, we extract all possible rule candidates from a tree-annotated corpus, compliant with the rule template. The extracted candidates are grouped according to their respective substrings. Next, using the decision tree method, these candidates are enriched with contextual and lexical information. The contextualized and lexicalized rules are verified through cross-validation to retain only those rules that are accurate. The successfully verified rules become the final partial parsing rules. The remaining candidates that cannot be verified are forwarded to the tree underspecification step, which merges tree structures with hard ambiguities. As seen in Fig. 1 , the underspecified candidates return to the refinement step. The following subsections describe each step in detail.", "cite_spans": [], "ref_spans": [ { "start": 676, "end": 684, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1415, "end": 1421, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Automatic Rule Acquisition", "sec_num": "2" }, { "text": "From the tree-annotated corpus, we extract all possible candidates for partial parsing rules in accordance with the rule template. 
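For concreteness, the rule template can be pictured as a small record type keyed by its substring. This is an illustrative sketch only (the type and field names are ours; the paper does not describe its implementation at this level):

```python
# Sketch of the rule template: left context | substring | right context -> substructure.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class RuleCandidate:
    left: str          # left context, restricted to length one
    substring: tuple   # sequence of POS tags / non-terminals to be reduced
    right: str         # right context, restricted to length one
    substructure: str  # subtree of depth one or more, or "S_null" (negative evidence)

# Candidates are grouped by their substring (cf. groups G1 and G2 in Fig. 3).
groups = defaultdict(list)
for cand in (
    RuleCandidate("", ("npp",), "+jxt", "(NP npp)"),
    RuleCandidate("", ("npp", "+jxt", "ncn"), "+jco", "S_null"),
):
    groups[cand.substring].append(cand)

print({substring: len(cands) for substring, cands in groups.items()})
```
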
Scanning the input sentences, annotated with their syntactic structures, one by one, we extract the substructure corresponding to every possible substring at each level of the syntactic structure. We define level 0 as the part-of-speech tags of an input sentence, and level n as the nodes whose maximum depth is n. If no structure precisely corresponds to a particular substring, then a null substructure is extracted, which represents negative evidence. Figure 2 shows an example sentence 2 with its syntactic structure 3 and some of the candidates for partial parsing rules extracted from the left side of the example. In this figure, the first partial parsing rule candidate shows how the substring 'npp' can be constructed into the substructure 'NP'. S null denotes negative evidence.", "cite_spans": [], "ref_spans": [ { "start": 581, "end": 589, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Extracting Candidates", "sec_num": "2.1" }, { "text": "The extracted rule candidates are gathered and grouped according to their respective substrings. Figure 3 4 shows the candidate groups. In this figure, G 1 and G 2 are the group names, and the number in the last column refers to the frequency with which each candidate occurs in the training corpus. Groups G 1 and G 2 have 2 and 3 candidates, respectively. When a particular group has only one candidate, that candidate can always be applied to a corresponding substring deterministically. In contrast, if there is more than one candidate in a particular group, those candidates should be enriched with contextual and lexical information to make each candidate distinct for proper application to a corresponding substring. (Footnote 2: 'NOM' refers to the nominative case and 'ACC' refers to the accusative case. The term 'npp' denotes personal pronoun; 'jxt' denotes topicalized auxiliary particle; 'ncn' denotes non-predicative common noun; 'jco' denotes objective case particle; 'pvg' denotes general verb; 'ef' denotes final ending; and 'sf' denotes full stop symbol. For a detailed description of the KAIST corpus and its tagset, refer to Lee [11] . The symbol '+' is not a part-of-speech tag, but rather a delimiter between words within a word phrase. Footnote 3: In Korean, a word phrase, similar to bunsetsu in Japanese, is defined as a spacing unit with one or more content words followed by zero or more functional words. A content word indicates the meaning of the word phrase in a sentence, while a functional word (a particle or a verbal ending) indicates the grammatical role of the word phrase. In the KAIST corpus used in this paper, a functional word is not included in the non-terminal that the preceding content word belongs to, following the restricted representation of phrase structure grammar for Korean [12] . For example, a word phrase \"na/npp + neun/jxt\" is annotated as \"(NP na/npp ) + neun/jxt\", as in Fig. 2 .)", "cite_spans": [ { "start": 878, "end": 882, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 984, "end": 985, "text": "3", "ref_id": "BIBREF2" }, { "start": 1541, "end": 1545, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 97, "end": 107, "text": "Figure 3 4", "ref_id": null }, { "start": 1644, "end": 1650, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Extracting Candidates", "sec_num": "2.1" }, 
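A toy sketch of this extraction, assuming parse trees are nested (label, children...) tuples, is given below. For brevity it collects only depth-one positive evidence; the procedure described above also extracts deeper substructures and records null substructures (negative evidence) for unmatched substrings:

```python
# Toy extraction of rule candidates from one parse tree (illustrative only).
def extract(tree, out):
    """For each internal node, record: child-label sequence -> subtree."""
    if isinstance(tree, str):          # a leaf, i.e. a level-0 POS tag
        return
    children = tree[1:]
    substring = tuple(c if isinstance(c, str) else c[0] for c in children)
    out.append((substring, tree))      # positive evidence (depth one here)
    for child in children:
        extract(child, out)

# A skeleton in the spirit of Fig. 2 (labels only, morphemes omitted).
tree = ("S", ("NP", "npp"), "+jxt",
        ("VP", ("NP", "ncn"), "+jco", ("VP", "pvg", "+ef", "+sf")))
cands = []
extract(tree, cands)
for substring, substructure in cands:
    print(substring, "->", substructure[0])
```
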
2", "ref_id": null } ], "eq_spans": [], "section": "Extracting Candidates", "sec_num": "2.1" }, { "text": "N P N P V P V P V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Candidates", "sec_num": "2.1" }, { "text": "| | n p p + j x t | | n p p + j x t n c n | \u2026 | N P + j x t N P + j c o V P | \u2026 | N P + j c o V P | \u2026 | V P + e f + s f | S n u l l S n u l l N P + j x t N P + j c o V P V P V P N P V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Candidates", "sec_num": "2.1" }, { "text": "This step refines ambiguous candidates with contextual and lexical information to make them unambiguous. First, each candidate needs to be annotated with contextual and lexical information occurring in the training corpus, as shown in Fig. 4 . In this figure, we can see that a substring with lexical information such as 'su/nbn' unambiguously constitutes the substructure 'AUXP'. We use the decision tree method, C4.5 [14] , to select the important contextual and lexical information that can facilitate the establishment of unambiguous partial parsing rules. The features used in the decision tree method are the lexical information of each terminal or Figure 5 shows a section of the decision tree learned from our example substring. The deterministic partial parsing rules in Fig. 6 are extracted from the decision tree. As shown in Fig. 6 , only the lexical entries for the second and the fourth morphemes in the substring are selected as additional lexical information, and none of the contexts is selected in this case. We should note that the rules induced from the decision tree are ordered. Since these ordered rules do not interfere with those from other groups, they can be modified without much difficulty. After we enrich the partial parsing rules using the decision tree method, we verify them by estimating the accuracy of each rule to filter out less deterministic rules. We estimate the error rates (%) of the rule candidates via a 10-fold cross validation on the training corpus. The rule candidates of the group with an error rate that is less than the predefined threshold, \u03b8, can be extracted to the final partial parsing rules. The candidates in the group G 2 in Fig. 3 could not be extracted as the final partial parsing rules, because the estimated error rate of the group was higher than the threshold. The candidates in G 2 are set aside for tree underspecification processing. Using the threshold \u03b8, we can control the number of the final partial parsing rules and the ratio of the precision/recall trade-off for the parser that adopts the extracted partial parsing rules.", "cite_spans": [ { "start": 419, "end": 423, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 235, "end": 241, "text": "Fig. 4", "ref_id": null }, { "start": 655, "end": 663, "text": "Figure 5", "ref_id": null }, { "start": 780, "end": 786, "text": "Fig. 6", "ref_id": "FIGREF3" }, { "start": 837, "end": 843, "text": "Fig. 6", "ref_id": "FIGREF3" }, { "start": 1686, "end": 1692, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "Refining Candidates", "sec_num": "2.2" }, { "text": "The group G 2 in Fig. 3 has one of the attachment ambiguities, namely, consecutive subordinate clauses. Figure 7 shows sections of two different trees extracted from a tree-annotated corpus. The two trees have identical substrings, but are analyzed differently. 
{ "text": "The group G 2 in Fig. 3 exhibits one of the attachment ambiguities, namely, consecutive subordinate clauses. Figure 7 shows sections of two different trees extracted from a tree-annotated corpus. The two trees have identical substrings but are analyzed differently. This figure exemplifies how the ambiguity hinges on the lexical association between the verb phrases, which is difficult to encode in rules. There are many other syntactic ambiguities, such as coordination and noun phrase bracketing, that are difficult to resolve with local information; their resolution usually requires lexical co-occurrence, global context, or semantics. Such ambiguities can deteriorate the precision of partial parsing or limit the results of partial parsing to a relatively shallow depth. Rule candidates with these ambiguities mostly have several different structures assigned to the same substrings under the same non-terminals. In this paper, we refer to them as internal syntactic ambiguities. We manually examined the patterns of the internal syntactic ambiguities found in the KAIST corpus, that is, of the candidates that could not be refined automatically owing to their low estimated accuracies. During this process, we observed that few internal syntactic ambiguities could be resolved with local information.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 23, "text": "Fig. 3", "ref_id": null }, { "start": 104, "end": 112, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Dealing with Hard Ambiguities: The Underspecified Representation", "sec_num": "2.3" }, { "text": "In this paper, we handle internal syntactic ambiguities by merging the candidates using tree intersection and making them underspecified. This underspecified representation enables an analysis with broader coverage, without deteriorating the determinism or the precision of partial parsing. Since only different structures under the same non-terminal are merged, the underspecification does not harm the structure of higher nodes. Figure 8 shows the underspecified candidates of group G 2 . In this figure, the first two rules in G 2 are reduced to the merged 'VP'. Underspecified candidates are also enriched with contextual and lexical information using the decision tree method, and they are verified through cross-validation, as described in Sect. 2.2. The resolution of internal syntactic ambiguities is forwarded to a module beyond the partial parser. If necessary, by providing all possible structures of the underspecified parts, we can prevent later processing stages from re-analyzing those parts. Any remaining candidates that are not selected as partial parsing rules after all three steps are discarded.", "cite_spans": [], "ref_spans": [ { "start": 433, "end": 441, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Dealing with Hard Ambiguities: The Underspecified Representation", "sec_num": "2.3" }, { "text": "We have performed experiments to show the usefulness of the automatically extracted partial parsing rules. For our evaluations, we implemented a naive partial parser, using TRIE indexing to search the partial parsing rules. The input of the partial parser is a part-of-speech tagged sentence, and the result is usually a sequence of subtrees. At each position in an input sentence, the parser tries to choose a rule group using longest-match heuristics. Then, if any matches are found, the parser applies the first-matching rule in the group to the corresponding substring, because the rules induced from the decision tree are ordered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3" }, 
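A minimal sketch of this longest-match lookup is given below, assuming rule groups are indexed by their substrings in a trie; the data here are placeholders, and the actual parser also consults the contexts and lexical features encoded in the rules:

```python
from typing import Dict, List, Optional, Tuple

class TrieNode:
    def __init__(self):
        self.children: Dict[str, "TrieNode"] = {}
        self.rules: Optional[List[str]] = None   # ordered rules of a group

def insert(root: TrieNode, substring: List[str], rules: List[str]) -> None:
    node = root
    for symbol in substring:
        node = node.children.setdefault(symbol, TrieNode())
    node.rules = rules

def longest_match(root: TrieNode, symbols: List[str],
                  start: int) -> Optional[Tuple[int, List[str]]]:
    """Longest rule group matching symbols[start:end]; returns (end, rules)."""
    node, best = root, None
    for i in range(start, len(symbols)):
        node = node.children.get(symbols[i])
        if node is None:
            break
        if node.rules is not None:
            best = (i + 1, node.rules)
    return best

root = TrieNode()
insert(root, ["npp"], ["npp -> (NP npp)"])
insert(root, ["etm", "nbn", "+jcs", "paa"], ["-> AUXP", "-> S_null"])  # ordered
print(longest_match(root, ["etm", "nbn", "+jcs", "paa", "+ef"], 0))
# the parser applies the first-matching rule of the returned group
```
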
{ "text": "In our experiments, we used the KAIST tree-annotated corpus [11] . The training corpus contains 10,869 sentences (289,362 morphemes), with an average length of 26.6 morphemes. The test corpus contains 1,208 sentences, with an average length of 26.0 morphemes. The validation corpus, used for choosing the threshold \u03b8, contains 1,112 sentences, with an average length of 20.1 morphemes, and is distinct from both the training corpus and the test corpus.", "cite_spans": [ { "start": 60, "end": 64, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3" }, { "text": "The performance of the partial parser was evaluated using the PARSEVAL measures [5] . The F measure, a complement of the E measure [16] , is used to combine precision (LP) and recall (LR) into a single measure of overall performance, and is defined as follows:", "cite_spans": [ { "start": 76, "end": 79, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 127, "end": 131, "text": "[16]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3" }, { "text": "F_{\\beta} = \\frac{(\\beta^{2} + 1) \\cdot LP \\cdot LR}{\\beta^{2} \\cdot LP + LR}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3" }, 
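As a quick numeric check of this definition (our own illustration, with LP and LR taken from the \u03b8 = 11 row of Table 1):

```python
def f_beta(lp: float, lr: float, beta: float) -> float:
    """F measure combining labeled precision and labeled recall."""
    return (beta ** 2 + 1) * lp * lr / (beta ** 2 * lp + lr)

print(round(f_beta(95.1, 75.1, beta=0.4), 1))  # 91.7, matching Table 1
print(round(f_beta(95.1, 75.1, beta=1.0), 1))  # 83.9 (equal weighting)
```
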
{ "text": "In the above equation, \u03b2 is a factor that determines the relative weighting of precision and recall: \u03b2 < 1 weights precision more heavily than recall, \u03b2 > 1 weights recall more heavily than precision, and \u03b2 = 1 weights precision and recall equally. The parsing result is affected by the predefined threshold \u03b8 (described in Sect. 2.2), which controls both the accuracy of the partial parser and the number of extracted rules. Table 1 shows the number of extracted rules and how precision and recall trade off on the validation corpus as the threshold \u03b8 is varied. As can be seen, a lower threshold \u03b8 corresponds to higher precision and lower recall, and a higher threshold corresponds to lower precision and higher recall. For a partial parser, precision is generally favored over recall. In this paper, we used a value of 11 for \u03b8, at which the precision was over 95% and F \u03b2=0.4 was highest. The value of this threshold should be set according to the requirements of the relevant application. Table 2 presents the precision and the recall of the partial parser on the test corpus when the threshold \u03b8 was set to 11. In the baseline grammar, we selected the most probable structure for a given substring from each group of candidates. The \"depth 1 rule only\" grammar is the set of rules extracted under the restriction that only substructures of depth one are permitted in the rule template. The \"underspecified\" grammar is the final version of our partial parsing rules, and the \"not underspecified\" grammar is the set of rules extracted without the underspecification processing. Both PCFG and Lee [11] are statistical full parsers of Korean; Lee enriched the grammar with contextual and lexical information to improve the accuracy of the parser. Both were trained and tested on the same corpus as ours, for comparison. The performance of both the \"not underspecified\" grammar and the \"underspecified\" grammar was greatly improved compared with the baseline grammar and PCFG, neither of which adopts contextual and lexical information in its rules. The \"not underspecified\" grammar performed better than the \"depth 1 rule only\" grammar. This indicates that increasing the depth of a rule is helpful in partial parsing, as it is in statistical full parsing, for example, Data-Oriented Parsing [6] . Comparing the \"underspecified\" grammar with the \"not underspecified\" grammar, we can see that underspecification leads to broader coverage, that is, higher recall. The precision of the \"underspecified\" grammar was above 95%; in other words, when the parser generates 20 structures, 19 of them are correct. However, its recall fell far below that of the statistical full parser [11] . When we set \u03b8 to 26, the underspecified grammar slightly outperformed the full parser in terms of F \u03b2=1 , although the proposed partial parser does not always produce one complete parse tree 5 . It follows from what has been said thus far that the proposed parser can serve as a high-precision partial parser and can approach the performance level of a statistical full parser, depending on the threshold \u03b8.", "cite_spans": [ { "start": 1687, "end": 1691, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 2390, "end": 2393, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 2788, "end": 2792, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 3005, "end": 3006, "text": "5", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 455, "end": 462, "text": "Table 1", "ref_id": "TABREF3" }, { "start": 1046, "end": 1053, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3" }, { "text": "The current implementation of our parser has an O(n^2 m_r) worst-case time complexity for the case of a skewed binary tree, where n is the length of the input sentence and m_r is the number of rules. Because m_r is constant, because much more than two elements are typically reduced to subtrees of depth one or more at each level of parsing, and because, unlike full parsing, the number of recursions in partial parsing seems to be limited 6 , we can parse in near-linear time. Lastly, we manually examined the first 100 or so errors occurring in the test corpus. In spite of underspecification, the errors related to conjunctions and attachments were the most frequent. The conjunction errors were mostly caused by substrings not occurring in the training corpus, while the attachment cases lacked the contextual or lexical information needed for a given substring. These errors can be partly resolved by increasing the size of the corpus, but it seems that they cannot be resolved completely with partial parsing. In addition, there were errors related to noun phrase bracketing, date/time/unit expressions, and either incorrectly tagged or inherently ambiguous sentences. For date, time, and unit expressions, manually encoded rules may be effective with partial parsing, since such expressions appear to be used in a regular way. We should note that many unrecognized phrases involved expressions not occurring in the training corpus. This is obviously because our grammar cannot handle unseen substrings; hence, alleviating the sparseness of the sequences will be the goal of our future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3" }, { "text": "In this paper, we have proposed a method of automatically extracting the partial parsing rules from a tree-annotated corpus using a decision tree method. 
We consider partial parsing as a classification; as such, for a substring in an input sentence, a proper structure is chosen among the structures occurring in the corpus. Highly accurate partial parsing rules can be extracted by (1) allowing rules to construct a subtree of depth one or more; (2) using decision tree induction, with features of contextual and lexical information, for the classification; and (3) verifying the induced rules through cross-validation. By merging ambiguous substructures in non-deterministic rules using an underspecified representation, we can handle syntactic ambiguities that are difficult to resolve with local information, such as coordination and noun phrase bracketing ambiguities. Using a threshold, \u03b8, we can control the number of partial parsing rules and the precision/recall trade-off of the partial parser. The value of this threshold should be set according to the requirements of the relevant application. Our experiments showed that the proposed partial parser using the automatically extracted rules is accurate and efficient, and achieves reasonable coverage for Korean.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "This example is excerpted from Tjong Kim Sang [17].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The term 'etm' denotes adnominalizing ending; 'nbn' denotes non-unit bound noun; 'jcs' denotes subjective case particle; 'paa' denotes attributive adjective; 'ecs' denotes subordinate conjunctive ending; and 'AUXP' denotes auxiliary phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "(5) In the test corpus, the percentage of sentences for which our partial parser (\u03b8=26) produced one complete parse tree was 70.9%; when \u03b8=11, the percentage was 35.9%. (6) In our parser, the maximum number of recursions was 10 and the average number of recursions was 4.47. (7) This result was obtained using a Linux machine with a Pentium III 700 MHz processor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Part-of-speech tagging and partial parsing. Corpus-Based Methods in Language and Speech", "authors": [ { "first": "S", "middle": [ "P" ], "last": "Abney", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abney, S.P.: Part-of-speech tagging and partial parsing. Corpus-Based Methods in Language and Speech. Kluwer Academic Publishers (1996)", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Partial parsing via finite-state cascades", "authors": [ { "first": "S", "middle": [ "P" ], "last": "Abney", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the ESSLLI '96 Robust Parsing Workshop", "volume": "", "issue": "", "pages": "8--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abney, S.P.: Partial parsing via finite-state cascades. 
Proceedings of the ESSLLI '96 Robust Parsing Workshop (1996) 8-15", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Incremental finite-state parsing", "authors": [ { "first": "S", "middle": [], "last": "A\u00eft-Mokhtar", "suffix": "" }, { "first": "J", "middle": [ "P" ], "last": "Chanod", "suffix": "" } ], "year": 1997, "venue": "Proceedings of Applied Natural Language Processing", "volume": "", "issue": "", "pages": "72--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "A\u00eft-Mokhtar, S., Chanod, J.P.: Incremental finite-state parsing. Proceedings of Applied Natural Language Processing (1997) 72-79", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A memory-based approach to learning shallow natural language patterns", "authors": [ { "first": "S", "middle": [], "last": "Argamon-Engelson", "suffix": "" }, { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Krymolowski", "suffix": "" } ], "year": 1999, "venue": "Journal of Experimental and Theoretical AI", "volume": "11", "issue": "3", "pages": "369--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Argamon-Engelson, S., Dagan, I., Krymolowski, Y.: A memory-based approach to learning shallow natural language patterns. Journal of Experimental and Theoretical AI 11(3) (1999) 369-390", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A procedure for quantitatively comparing the syntactic coverage of English grammars", "authors": [ { "first": "E", "middle": [], "last": "Black", "suffix": "" }, { "first": "S", "middle": [], "last": "Abney", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickenger", "suffix": "" }, { "first": "C", "middle": [], "last": "Gdaniec", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "P", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "D", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "R", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [], "last": "Klavans", "suffix": "" }, { "first": "M", "middle": [], "last": "Liberman", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "T", "middle": [], "last": "Strzalkowski", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "306--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black, E., Abney, S., Flickenger, D., Gdaniec, C., Grishman, R., Harrison, P., Hindle, D., Ingria, R., Jelinek, F., Klavans, J., Liberman, M., Marcus, M., Roukos, S., Santorini, B., Strzalkowski, T.: A procedure for quantitatively comparing the syntactic coverage of English grammars. Proceedings of the DARPA Speech and Natural Language Workshop (1991) 306-311", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Enriching Linguistics with Statistics: Performance Models of Natural Language", "authors": [ { "first": "R", "middle": [], "last": "Bod", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bod, R.: Enriching Linguistics with Statistics: Performance Models of Natural Language. Ph.D. Thesis. 
University of Amsterdam (1995)", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Error-driven pruning of treebank grammars for base noun phrase identification", "authors": [ { "first": "C", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "D", "middle": [], "last": "Pierce", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "218--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cardie, C., Pierce, D.: Error-driven pruning of treebank grammars for base noun phrase identification. Proceedings of 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (1998) 218-224", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning rules and their exceptions", "authors": [ { "first": "H", "middle": [], "last": "D\u00e9jean", "suffix": "" } ], "year": 2002, "venue": "Journal of Machine Learning Research", "volume": "2", "issue": "", "pages": "669--693", "other_ids": {}, "num": null, "urls": [], "raw_text": "D\u00e9jean, H.: Learning rules and their exceptions. Journal of Machine Learning Research 2 (2002) 669-693", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A parser for text corpora. Computational Approaches to the Lexicon", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "103--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, D.: A parser for text corpora. Computational Approaches to the Lexicon. Oxford University Press (1995) 103-151", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Fastus: A cascaded finite-state transducer for extracting information from natural-language text. Finite-State Language Processing", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Hobbs", "suffix": "" }, { "first": "D", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "J", "middle": [], "last": "Bear", "suffix": "" }, { "first": "D", "middle": [], "last": "Israel", "suffix": "" }, { "first": "M", "middle": [], "last": "Kameyama", "suffix": "" }, { "first": "M", "middle": [], "last": "Stickel", "suffix": "" }, { "first": "M", "middle": [], "last": "Tyson", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "383--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hobbs, J.R., Appelt, D., Bear, J., Israel, D., Kameyama, M., Stickel, M., Tyson, M.: Fastus: A cascaded finite-state transducer for extracting information from natural-language text. Finite-State Language Processing. The MIT Press (1997) 383-406", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Probabilistic Parsing of Korean based on Language-Specific Properties", "authors": [ { "first": "K", "middle": [ "J" ], "last": "Lee", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, K.J.: Probabilistic Parsing of Korean based on Language-Specific Properties. Ph.D. Thesis. 
KAIST, Korea (1998)", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Restricted representation of phrase structure grammar for building a tree annotated corpus of Korean", "authors": [ { "first": "K", "middle": [ "J" ], "last": "Lee", "suffix": "" }, { "first": "G", "middle": [ "C" ], "last": "Kim", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Kim", "suffix": "" }, { "first": "Y", "middle": [ "S" ], "last": "Han", "suffix": "" } ], "year": 1997, "venue": "Natural Language Engineering", "volume": "3", "issue": "2", "pages": "215--230", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, K.J., Kim, G.C., Kim, J.H., Han, Y.S.: Restricted representation of phrase structure grammar for building a tree annotated corpus of Korean. Natural Language Engineering 3(2) (1997) 215-230", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A learning approach to shallow parsing", "authors": [ { "first": "M", "middle": [], "last": "Mu\u00f1oz", "suffix": "" }, { "first": "V", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" }, { "first": "D", "middle": [], "last": "Zimak", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "168--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mu\u00f1oz, M., Punyakanok, V., Roth, D., Zimak, D.: A learning approach to shallow parsing. Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (1999) 168-178", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "C4.5: Programs for Machine Learning", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Quinlan", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers (1993)", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "L", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Proceedings of Third Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "82--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramshaw, L.A., Marcus, M.P.: Text chunking using transformation-based learning. Proceedings of Third Workshop on Very Large Corpora (1995) 82-94", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Information Retrieval", "authors": [ { "first": "C", "middle": [], "last": "Van Rijsbergen", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "van Rijsbergen, C.: Information Retrieval. Butterworths (1975)", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Memory-based shallow parsing", "authors": [ { "first": "Tjong", "middle": [], "last": "Kim Sang", "suffix": "" }, { "first": "E", "middle": [ "F" ], "last": "", "suffix": "" } ], "year": 2002, "venue": "Journal of Machine Learning Research", "volume": "2", "issue": "", "pages": "559--594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tjong Kim Sang, E.F.: Memory-based shallow parsing. 
Journal of Machine Learning Research 2 (2002) 559-594", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Procedure for extracting partial parsing rules", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "An example sentence and the extracted candidates for partial parsing rules (Fig. 2); groups of partial parsing rule candidates (Fig. 3)", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Annotated candidates for the G1 group rules, e.g. nbn = su(way), paa = iss(exist), paa = eop(not exist), paa = man(much) (Fig. 4); a section of the decision tree (Fig. 5)", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "| etm su/nbn + jcs iss/paa | \u2212\u2192 AUXP | etm su/nbn + jcs eop/paa | \u2212\u2192 AUXP | etm su/nbn + jcs man/paa | \u2212\u2192 S null Partial parsing rules extracted from a section of the decision tree in Fig. 5", "num": null, "type_str": "figure" }, "FIGREF4": { "uris": null, "text": "Examples of internal syntactic ambiguities, e.g. 'cheonsaui ttange ga + seo/ecs' (go to the land of angels - as), 'jal sarabo + ryeogo/ecs' (live well - in order to), 'jibe ga + seo/ecs' (go home - as), 'TVreul bo + ryeogo/ecs' (watch TV - in order to), 'gabangeul chaenggida' (pack one's bag) (Fig. 7); underspecified candidates of group G2 (Fig. 8)", "num": null, "type_str": "figure" }, "FIGREF5": { "uris": null, "text": "shows the time spent in parsing as a function of the sentence length 7 .", "num": null, "type_str": "figure" }, "FIGREF6": { "uris": null, "text": "Time spent in parsing", "num": null, "type_str": "figure" }, "TABREF0": { "html": null, "type_str": "table", "text": "In/O early/I trading/I in/O Hong/I Kong/I Monday/B ,/O gold/I was/O quoted/O at/O $/I 366.50/I an/B ounce/I ./O", "content": "", "num": null }, "TABREF2": { "html": null, "type_str": "table", "text": "sal/pvg + | r/etm su/nbn + ga/jcs iss/paa | + da/ef \u2192 AUXP i/jp + | r/etm su/nbn + ga/jcs eop/paa | + da/ef \u2192 AUXP nolla/pvg + | n/etm jeok/nbn + i/jcs iss/paa | + da/ef \u2192 S null wanjeonha/paa + | n/etm geot/nbn + i/jcs eop/paa | + go/ecc \u2192 S null kkeutna/pvg + | n/etm geut/nbn + i/jcs ani/paa | + ra/ecs \u2192 S null ik/pvg + | neun/etm geut/nbn + i/jcs jot/paa | + da/ef \u2192 S null ha/xsv + | r/etm nawi/nbn + ga/jcs eop/paa | + da/ef \u2192 S null", "content": "
", "num": null }, "TABREF3": { "html": null, "type_str": "table", "text": "Precision/Recall with respect to the threshold, \u03b8, for the validation corpus \u03b8 # of rules precision recall F \u03b2=0.4", "content": "
6 | 18,638 | 95.5 | 72.9 | 91.6
11 | 20,395 | 95.1 | 75.1 | 91.7
16 | 22,650 | 94.2 | 78.0 | 91.6
21 | 25,640 | 92.6 | 83.3 | 91.2
26 | 28,180 | 92.0 | 84.7 | 90.9
Table 2. Experimental results of the partial parser for Korean
Grammar | precision | recall | F \u03b2=0.4 | F \u03b2=1
baseline | 73.0 | 72.0 | 72.9 | 72.5
depth 1 rule only | 95.2 | 68.3 | 90.3 | 79.6
not underspecified | 95.7 | 71.6 | 91.4 | 81.9
underspecified | 95.7 | 73.6 | 91.9 | 83.2
underspecified (in case \u03b8=26) | 92.2 | 83.5 | 90.9 | 87.6
PCFG | 80.0 | 81.5 | 80.2 | 80.7
Lee [11] | 87.5 | 87.5 | 87.5 | 87.5
", "num": null } } } }