|
{ |
|
"paper_id": "O06-1004", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:07:51.394499Z" |
|
}, |
|
"title": "Automatic Learning of Context-Free Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Tai-Hung", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Sun Yat-Sen University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chun-Han", |
|
"middle": [], |
|
"last": "Tseng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Sun Yat-Sen University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chia-Ping", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Sun Yat-Sen University", |
|
"location": {} |
|
}, |
|
"email": "cpchen@cse.nsysu.edu.tw" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we study the problem of learning context-free grammar from a corpus. We investigate a technique that is based on the notion of minimum description length of the corpus. A cost as a function of grammar is defined as the sum of the number of bits required for the representation of a grammar and the number of bits required for the derivation of the corpus using that grammar. On the Academia Sinica Balanced Corpus with part-of-speech tags, the overall cost, or description length, reduces by as much as 14% compared to the initial cost. In addition to the presentation of the experimental results, we also include a novel analysis on the costs of two special context-free grammars, where one derives only the set of strings in the corpus and the other derives the set of arbitrary strings from the alphabet.", |
|
"pdf_parse": { |
|
"paper_id": "O06-1004", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we study the problem of learning context-free grammar from a corpus. We investigate a technique that is based on the notion of minimum description length of the corpus. A cost as a function of grammar is defined as the sum of the number of bits required for the representation of a grammar and the number of bits required for the derivation of the corpus using that grammar. On the Academia Sinica Balanced Corpus with part-of-speech tags, the overall cost, or description length, reduces by as much as 14% compared to the initial cost. In addition to the presentation of the experimental results, we also include a novel analysis on the costs of two special context-free grammars, where one derives only the set of strings in the corpus and the other derives the set of arbitrary strings from the alphabet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In this paper we study the problem of learning context-free grammar (CFG) [1] from a corpus of part-of-speech tags. The framework of CFG, although not complex enough to enclose all human languages [2] , is an approximation good enough for many purposes. For a natural language, a \"decent\" CFG can derive most sentences in the language. Put differently, with high probability, a sentence can be parsed by a parser based on the CFG.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 77, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 200, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Overview", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main issue with CFG is how to get one. Generally speaking, learning contextfree grammar from sample text is a difficult task. In [3] , a context-free grammar which derives exactly one string is reduced to a simpler grammar generating the same string. This achieves a lossless data compression. In [4] , an algorithm of time complexity O(N 2 ) for learning stochastic context-free grammar (SCFG) is proposed, where N is the number of non-terminal symbols. This is a great reduction from the inside-outside algorithm which requires O(N 3 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 136, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 304, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Overview", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Context-free grammars can be used in many applications. In [5] , an automatic speech recognition system uses a dynamic programming algorithm for recognizing and parsing spoken word strings of a context-free grammar in the Chomsky normal form. CFG can also be used in software engineering. In [6] , the components in a source code that need to be renovated are recognized and new code segments are generated from context-free grammars. In addition, since parsing outputs larger and less-ambiguous meaning-bearing structures in the sentence, for high-level natural language processing tasks such as question answering [7] and interactive voice response [8] systems, the design and implementation of CFG can be crucial to their success.", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 62, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 295, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 616, |
|
"end": 619, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 654, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Overview", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "If the goal of learning is to acquire a grammar that derives most sentences in the domain of interest, then a good one is apparently domain-specific. An all-purpose CFG is not likely to be the best since it tends to derive a much larger set than is necessary. We thus propose to learn CFGs from corpus. The basic problem is this: Given a set of sentences, we want to find a set of derivation rules that can derive the original set of sentences. Note that there are infinitely many CFGs from which the original set of sentences can be derived. To discriminate one CFG from another, we will consider the costs they incur in deriving the original corpus. The cost functions will be defined shortly. Thus, we are proposing to find the set of rules that can derive the original language with the minimum cost.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Overview", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper is organized as follows. Following this introduction and review, we analyze two special cases of CFG and the proposed rules in Section 2. The experimental results are presented in Section 3 followed by discussion and comments. In Section 4, we summarize our work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Overview", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are two different kinds of costs in the description of a corpus by a CFG. The first kind is incurred from the representation of the CFG. A rule in a CFG is of the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A \u2192 \u03b2.", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "It consists of a non-terminal symbol A on the left-hand side and a string of symbols \u03b2 on the right-hand side. The cost of a rule is the number of bits needed to represent the left-hand side and right-hand side. For (1), this is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C R = (1 + |\u03b2|) log |\u03a3|,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where \u03a3 is the symbol set and |\u03a3| is the number of symbols in \u03a3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
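
{

"text": "To make (2) concrete, here is a minimal sketch (our illustration, not part of the original paper) that computes the rule cost; base-2 logarithms are assumed since the costs are measured in bits:\n\nimport math\n\ndef rule_cost(rhs_length, alphabet_size):\n    # C_R = (1 + |beta|) log |Sigma|: one token for the left-hand side plus\n    # |beta| tokens for the right-hand side, each taking log |Sigma| bits.\n    return (1 + rhs_length) * math.log2(alphabet_size)\n\n# Example: a rule A -> B C over a 52-symbol alphabet.\n# rule_cost(2, 52) evaluates to about 17.1 bits.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Cost Functions",

"sec_num": "2.1"

},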
|
{ |
|
"text": "The second kind is the cost to derive the sentences given the rules. In order to derive a sentence W , the sequence of rules must be specified in the derivation from S 1 to W ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "S \u21d2 \u03b1 1 \u21d2 \u2022 \u2022 \u2022 \u21d2 W, or S * \u21d2 W,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where we have adopted the notation defined in [1] . The sequence of rules always starts with one of the S-derivation rules 2 , S \u2192 \u03b1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 49, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "This step results in a derived string \u03b1. If there is no non-terminal symbols in \u03b1, we are done with the derivation. Otherwise, we expand the left-most non-terminal symbol, say X, in \u03b1 by one of its derivation bodies 3 . The process continues until there is no non-terminal symbols in the derived string, which will be the sentence W at that point. To illustrate, suppose we are given the CFG", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 217, |
|
"text": "3", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u23a7 \u23aa \u23aa \u23aa \u23aa \u23a8 \u23aa \u23aa \u23aa \u23aa \u23a9 R 1 (S) : S \u2192 XXC . . . R 1 (X) : X \u2192 AB . . .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "and we want to derive the sentence W = ABABC. For this example, one can verify that the derivation sequence is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "R 1 (S)R 1 (X)R 1 (X)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": ", where R t (Z) represents the tth Zderivation rule. The cost is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C D = m k=1 log |R(s k )| = log |R(S)| + log |R(X)| + log |R(X)|,", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where m is the number of rules in the derivation sequence, s k is the non-terminal symbol for the kth derivation, and |R(s k )| is the number of rules in the CFG using s k as the left-hand side.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
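
{

"text": "The derivation cost (5) can be computed directly from the grammar and the derivation sequence. The following sketch is our illustration of the definition; the names are ours and base-2 logarithms are assumed:\n\nimport math\n\ndef derivation_cost(sequence, rules):\n    # rules: mapping from a non-terminal to its list of derivation bodies.\n    # Each step picks one rule among the |R(s_k)| alternatives that share\n    # the same left-hand side, which takes log |R(s_k)| bits to specify.\n    return sum(math.log2(len(rules[s])) for s in sequence)\n\n# Worked example from the text: deriving W = ABABC.\n# With only R_1(S) and R_1(X) present, each term is log 1 = 0; the dots\n# in the grammar stand for further rules that make |R(S)|, |R(X)| > 1.\nrules = {'S': ['XXC'], 'X': ['AB']}\nprint(derivation_cost(['S', 'X', 'X'], rules))  # 0.0 for this toy grammar",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Cost Functions",

"sec_num": "2.1"

},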
|
{ |
|
"text": "Combining 2and 5, the total cost is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C = p i=1 C R (i) + q j=1 C D (j) = p i=1 n i log |\u03a3| + q j=1 m j k=1 log |R(s k )|,", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where p is the number of rules, q is the number of sentences, n i is the number of symbol tokens in rule i, and m j is the length of the derivation sequence for sentence j.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Cost Functions", |
|
"sec_num": "2.1" |
|
}, |
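
{

"text": "Putting the two parts together, the total description length (6) can be sketched as follows (our illustration; base-2 logarithms assumed):\n\nimport math\n\ndef total_cost(rules, derivations, alphabet_size):\n    # rules: non-terminal -> list of right-hand sides (each a list of symbols).\n    # derivations: one leftmost-derivation sequence of non-terminals per sentence.\n    c_rules = sum((1 + len(rhs)) * math.log2(alphabet_size)\n                  for bodies in rules.values() for rhs in bodies)\n    c_deriv = sum(math.log2(len(rules[s]))\n                  for seq in derivations for s in seq)\n    return c_rules + c_deriv",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Cost Functions",

"sec_num": "2.1"

},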
|
{ |
|
"text": "We will analyze the costs for two special CFGs in this section. The first CFG, which we call the exhaustive CFG, uses every distinct sentence in the corpus as a direct derivation body of the start symbol S. The corpus is thus covered trivially. To compute the cost, we first rearrange the sentences in the lexicographic order and then move the repeated sentences to the back. The number of symbols for a rule is simply the number of words of the corresponding sentence n w , plus 1 (for the start symbol S), and |\u03a3| is the vocabulary size |V | of the corpus plus 1 (again for the start symbol). Thus the rule cost is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C R = n log |\u03a3| = (n w + 1) log(|V | + 1).", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this case, each sentence is derived from S in one step, by specifying the correct one out of the |R(S)| rules. Thus the derivation cost for a sentence is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C D = log |R(S)|.", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Note that q is generally not equal to |R(S)| as there may be repeated sentences. Combining (7) and (8) , the total cost for the exhaustive CFG is", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 94, |
|
"text": "(7)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 99, |
|
"end": 102, |
|
"text": "(8)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "C = |R(S)| i=1 C R (i) + q j=1 C D (j) = |R(S)| i=1 (n w (i) + 1) log(|V | + 1) + q log |R(S)|. (9)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
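
{

"text": "Equation (9) depends only on simple corpus statistics, so it can be evaluated directly. The sketch below is our illustration, assuming base-2 logarithms and one S-rule per distinct sentence:\n\nimport math\n\ndef exhaustive_cfg_cost(distinct_lengths, num_sentences, vocab_size):\n    # distinct_lengths: n_w(i) for each of the |R(S)| distinct sentences.\n    r_s = len(distinct_lengths)\n    c_rules = sum((n_w + 1) * math.log2(vocab_size + 1)\n                  for n_w in distinct_lengths)\n    c_deriv = num_sentences * math.log2(r_s)  # one rule choice per sentence\n    return c_rules + c_deriv",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Special-Case Analysis",

"sec_num": "2.2"

},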
|
{ |
|
"text": "The second case, which we call the recursive CFG, uses recursive derivation for S,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "S \u2192 AS,", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where the non-terminal A can be expanded to be any word in the vocabulary. Combined with the rule S \u2192 , this CFG clearly covers any string of the alphabet, \u03a3 * , which is a much larger set than any real corpus. The rule cost is significantly smaller in recursive CFG than that of the exhaustive CFG. The only rules are the two instances of S-derivation and the |V | instances of A-derivation, so the rule cost is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C R = n log |\u03a3|,", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where n can be 1, 2 or 3 depending on the rule. The derivation cost, however, is much larger. To derive a sentence W of n w words, the recursive rule of S and substitution rule of A have to be applied alternatively for n w times, followed by a final rule of S \u2192 . Thus the derivation cost for a sentence is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C D = n w (1 + log |V |) + 1.", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Combining (11) and (12), the total cost for the recursive CFG is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C = 2+|V | i=1 n i log |\u03a3| + q j=1 C D (j) = (4 + 2|V |) log(|V | + 2) + q j=1 [n w (j)(1 + log |V |) + 1].", |
|
"eq_num": "(13)" |
|
} |
|
], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
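
{

"text": "Equation (13) likewise reduces to corpus statistics. A sketch (our illustration; base-2 logarithms assumed):\n\nimport math\n\ndef recursive_cfg_cost(sentence_lengths, vocab_size):\n    # S -> AS contributes 3 symbol tokens, S -> epsilon contributes 1, and\n    # each of the |V| rules A -> w contributes 2, giving 4 + 2|V| tokens;\n    # the alphabet holds the |V| words plus the two non-terminals S and A.\n    c_rules = (4 + 2 * vocab_size) * math.log2(vocab_size + 2)\n    c_deriv = sum(n_w * (1 + math.log2(vocab_size)) + 1\n                  for n_w in sentence_lengths)\n    return c_rules + c_deriv",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Special-Case Analysis",

"sec_num": "2.2"

},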
|
{ |
|
"text": "In Table 1 we list the costs of these cases computed on the Academia Sinica Balanced Corpus [9] (ASBC). The exhaustive CFG has a large rule cost (28.1 million bits) and a small derivation cost (4.1 mb). The recursive CFG has an extremely small rule cost (merely 607 bits) and an extremely large derivation cost (88.4 mb). To overall cost is higher for the recursive CFG (88.4 mb) than the exhaustive CFG (32.2 mb). From this table, one can see that there is a trade-off between the rule cost and the derivation cost. In addition, the numbers illustrate the important point that minimizing the rule cost alone will lead to a CFG that is inappropriate.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The exhaustive CFG is too restricted in the sense that it covers only those sentences seen in the learning corpus. The recursive CFG is too broad in the sense that it covers all sentences including the non-sense ones. Our goal is to strike a balance between these two extremes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Special-Case Analysis", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The special cases we analyze above do not have the minimum cost of all possible CFGs from which the corpus can be derived. To reduce the overall cost, we start with the initial CFG and then iteratively look for a new CFG rule. The kind of rules we investigate in this study is of the form X \u2192 Y Z.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Rules", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The introduction of such a rule to the exhaustive CFG described in Section 2.2 has the following impacts on the cost:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Rules", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 Each occurrence of Y Z is replaced by X, so the total number of symbol tokens in the S derivation rules is reduced.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Rules", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 |\u03a3| is incremented by 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Rules", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 The derivation cost may or may not change, depending on whether two or more of the S-derivation rules become identical.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Rules", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Since there are two symbols on the right-hand side, the number of candidate rules is |\u03a3 \u00d7 \u03a3| = |\u03a3| 2 , where \u03a3 is the current symbol set. To choose one, we compute the bigram counts of all bigrams and use the bigram with the highest count as the right-hand side of the new rule, whose left-hand side is a new symbol.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Rules", |
|
"sec_num": "2.3" |
|
}, |
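
{

"text": "A minimal sketch of this selection step (our illustration; each sentence is a list of part-of-speech tags):\n\nfrom collections import Counter\n\ndef best_bigram(sentences):\n    # Count every adjacent symbol pair across all S-derivation bodies and\n    # return the most frequent one as the right-hand side of the new rule.\n    counts = Counter()\n    for s in sentences:\n        counts.update(zip(s, s[1:]))\n    pair, count = counts.most_common(1)[0]\n    return pair, count",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Proposed Rules",

"sec_num": "2.3"

},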
|
{ |
|
"text": "We use the ASBC corpus for our experiments. In this corpus, the part-of-speech tag is labeled for each word. On the raw text data, we apply the following pre-processing steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1. The punctuation of period, question mark and exclamation mark are used to segment a sentence into multiple sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "2. The parenthesis tags are discarded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "3. The part-of-speech tag sequence is extracted for each sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3.1" |
|
}, |
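
{

"text": "We do not reproduce the exact ASBC file format here; the sketch below (our illustration) assumes each text is available as a list of (word, tag) pairs, and the tag name used for parentheses is a placeholder:\n\nSENTENCE_END = {'\u3002', '\uff1f', '\uff01'}  # full-width period, question and exclamation marks\n\ndef pos_sequences(tagged_words, drop_tags=('PARENTHESIS',)):\n    # tagged_words: [(word, pos), ...]; 'PARENTHESIS' stands in for\n    # whatever tag the corpus uses for parentheses.\n    sentences, current = [], []\n    for word, pos in tagged_words:\n        if pos in drop_tags:      # step 2: discard parenthesis tags\n            continue\n        if word in SENTENCE_END:  # step 1: segment at sentence-ending marks\n            if current:\n                sentences.append(current)\n            current = []\n        else:\n            current.append(pos)   # step 3: keep only the POS tags\n    if current:\n        sentences.append(current)\n    return sentences",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Preparation",

"sec_num": "3.1"

},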
|
{ |
|
"text": "The initial statistics of the data after pre-processing is summarized in Table 2 . A total of 229852 sentences are extracted and 203651 of them are distinct. The total number of tokens is 4.84 millions. Note that in the experiments, the symbols are the part-of-speech tags rather than the words for our CFG learning algorithm. This approach focuses more directly on the syntax and alleviates the issue of data sparsity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 80, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The learning process is an iterative algorithm. We start with the exhaustive CFG introduced in Section 2.2. In each epoch, we The representation cost as a function of the number of learned rules is presented in Figure 1 . There are three curves in the plot, representing the rule cost, the derivation cost and the total cost. The initial cost is 32.2 million bits, as we show in Section 2.2. As the learning process progresses, the two kinds of cost behave in different ways: the derivation cost stays constant while the rule cost decreases. The derivation cost is invariant for two reasons: 1) the number of S-derivation rules does not change and 2) there is no ambiguity in expanding non-S symbols, in our current learning scheme. The rule cost reduces because the decrease in the number of tokens in the rules outweighs the increase in the size of symbol set. As a result, the total cost reaches a minimum of 27.7 million bits when the 92nd rule is learned. The cost reduction is 14.0%. After the 92nd rule, the largest bigram count is not high enough for the reduction of the number of tokens to outweigh the increase in the alphabet, so the cost increases. The maximum bigram count is plotted against the epoch (number of rules learned) in Figure 2 . From this figure, one can see that the maximum bigram count decreases very fast. The top-20 rules learned from ASBC are listed in Table 3 . In this table, we also include examples of words and sentences from ASBC. In addition, the definition and more examples of the part-of-speech tags are listed in Table 4 . From Table 3 , one can see that the new symbols (M1, . . . , M20) here indeed represents larger phrasal structures than the basic part-of-speech tags. Furthermore, M7 and M9 embed M1, giving evidence for a deep parsing structure. In Figure 3 , two sentences in ASBC parsed based on the learned CFG (left) and parsed manually (right) are shown. We can see that the verb phrase (VP) structure of sentence (a) in both parses. For sentence (b), the VP is scattered in two subtrees M 40 and M 66. The symbol M 66 can be identified as a noun phrase (NP).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 219, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1245, |
|
"end": 1253, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1386, |
|
"end": 1393, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1557, |
|
"end": 1564, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1572, |
|
"end": 1579, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1800, |
|
"end": 1808, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2" |
|
}, |
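
{

"text": "For completeness, here is a compact sketch of the whole loop under our reading of the procedure; all names are ours, base-2 logarithms are assumed, and the stopping test compares the rule cost before and after a candidate rule (the derivation cost stays constant in this scheme):\n\nimport math\nfrom collections import Counter\n\ndef replace_pair(s, y, z, x):\n    # Rewrite occurrences of the pair (y, z) with x, left to right.\n    out, i = [], 0\n    while i < len(s):\n        if i + 1 < len(s) and s[i] == y and s[i + 1] == z:\n            out.append(x)\n            i += 2\n        else:\n            out.append(s[i])\n            i += 1\n    return out\n\ndef mdl_learn(sentences, vocab_size):\n    alphabet = vocab_size + 1                    # POS tags plus the start symbol S\n    tokens = sum(len(s) + 1 for s in sentences)  # +1 for each left-hand side S\n    learned = []\n    while True:\n        counts = Counter()\n        for s in sentences:\n            counts.update(zip(s, s[1:]))\n        if not counts:\n            break\n        (y, z), c = counts.most_common(1)[0]\n        # Each replacement saves one token; the new rule X -> Y Z adds 3\n        # tokens and one symbol. (c slightly overcounts overlapping pairs.)\n        new_tokens = tokens - c + 3\n        if new_tokens * math.log2(alphabet + 1) >= tokens * math.log2(alphabet):\n            break  # the rule cost would no longer decrease\n        x = 'M%d' % (len(learned) + 1)\n        sentences = [replace_pair(s, y, z, x) for s in sentences]\n        learned.append((x, (y, z)))\n        tokens, alphabet = new_tokens, alphabet + 1\n    return learned, sentences",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "3.2"

},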
|
{ |
|
"text": "The construction of a context-free grammar for a specific domain is a non-trivial task. To learn a CFG automatically from corpus, we define a cost function as the number of bits for the representation of CFG and sentence derivation. Our objective is to find a grammar that covers the learning corpus with the minimum cost. We analyze two extreme cases to illustrate the framework. The proposed rules are learned from heuristic bigram counting. The results show that on ASBC corpus, the reduction of cost is 14.0% of the initial cost.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "There are other kinds of CFG rules that are not considered in this study, such as the A \u2192 B|C rules. The candidate set of rules should be enlarged for more descriptive power. Another line of research is to extend the current work to the word level (as opposed to the part-of-speech level). This should be doable at least in a restricted domain. Finally, from the data compression and information theory [10] , one can design a different cost function that takes the symbol frequencies into account and achieves further reduction on the number of bits.", |
|
"cite_spans": [ |
|
{ |
|
"start": 403, |
|
"end": 407, |
|
"text": "[10]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "This work is supported by National Science Council under grant number 94-2213-E-110-061. We thank Sheng-Fu Wang and Chiao-Mei Wang for inspirational discussions. We also thank the reviewers for the thorough comments. Table 3 : Top-20 rules learned from the ASBC corpus. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 224, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "X \u2192 Y+Z (Y) (Z) M1 \u2192 DE+Na M2 \u2192 Na+Na M3 \u2192 Neu+Nf M4 \u2192 Na+D M5 \u2192 D+D M6 \u2192 D+VC M7 \u2192 Na+M1 M8 \u2192 Na+VC M9 \u2192 VH+M1 M10 \u2192 DE+Nv M11 \u2192 VH+Na M12 \u2192 P+Na M13 \u2192 P+Nc M14 \u2192 Nh+D M15 \u2192 Nep+Nf M16 \u2192 VC+Na M17 \u2192 Nc+Na M18 \u2192 Dfa+VH M19 \u2192 D+VH M20 \u2192 D+SHI AB -", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "S is known as the sentence symbol or the start symbol.2 The Z-derivation rules are those with Z as the left-hand side.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is also known as the leftmost derivation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Introduction to Automata Theory, Languages and Computation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hopcroft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Motwani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Ullman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. E. Hopcroft, R. Motwani and J. D. Ullman, \"Introduction to Automata Theory, Languages and Computation\", Addison-Wesley (2001).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Jurafsky and J. H. Martin, \"Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition\", Prentice Hall (2000).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Design of context-free grammars for lossless data compression", |
|
"authors": [ |
|
{

"first": "John",

"middle": [

"C"

],

"last": "Kieffer",

"suffix": ""

},

{

"first": "En-Hui",

"middle": [],

"last": "Yang",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 1998 IEEE Information Theory Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "84--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John C. Kieffer and En-hui Yang, \"Design of context-free grammars for lossless data compression,\" Proceedings of the 1998 IEEE Information Theory Workshop, pp. 84- 85.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Reducing the computation complexity for inferring stochastic context-free grammar rules from example text", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Lucke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of ICASSP 1994", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "353--356", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Lucke, \"Reducing the computation complexity for inferring stochastic context-free grammar rules from example text\", Proceedings of ICASSP 1994, pp. 353-356.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Dynamic Programming Speech Recognition Using a Context-Free Grammar", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of ICASSP'87", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "69--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Ney, \"Dynamic Programming Speech Recognition Using a Context-Free Gram- mar\", Proceedings of ICASSP'87, pp. 69-72.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Generation of components for software renovation factories from context-free grammars", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Van Den Brand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Sellink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Verhoef", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Working Conference on Reverse Engineering", |
|
"volume": "97", |
|
"issue": "", |
|
"pages": "144--153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark van den Brand, Alex Sellink, and Chris Verhoef, \"Generation of components for software renovation factories from context-free grammars\", In Working Conference on Reverse Engineering, IEEE Computer Society, WCRE97, pp. 144-153.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Parsing model for answer extraction in Chinese question answering system", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of IEEE NLP-KE '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "238--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Yuan and C. Wang, \"Parsing model for answer extraction in Chinese question answering system\", Proceedings of IEEE NLP-KE '05, pp. 238 -243.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatic creation and tuning of context free grammars for interactive voice response systems", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Balakrishna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Cave", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of IEEE NLP-KE '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "158--163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Balakrishna, D. Moldovan, E.K. Cave, \"Automatic creation and tuning of context free grammars for interactive voice response systems\", Proceedings of IEEE NLP-KE '05, pp. 158 -163.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Elements of Information Theory", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Cover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Cover and J. Thomas, \"Elements of Information Theory\", John Wiley and Sons (1991).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "compute the bigram counts for each bigram, 2. make a new rule with the bigram of the largest count as the right-hand side, 3. update the alphabet (symbol set), rules and derivations, 4. update the costs.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The cost as a function of the number of learned rules. The maximum bigram count as a function of the number of epochs.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Examples parsed by the learned CFG (left) and parsed manually (right). Here Cbb is conjunctive and VJ is transitive verb.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td/><td colspan=\"3\">rule cost derivation cost total cost</td></tr><tr><td colspan=\"2\">G1 28.1m</td><td>4.1m</td><td>32.2m</td></tr><tr><td>G2</td><td>607</td><td>88.4m</td><td>88.4m</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Costs in bits of exhaustive (G1) and recursive (G2) CFGs.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>|V |</td><td>q</td><td>|R(S)|</td><td>N q</td><td>N R</td></tr><tr><td colspan=\"5\">51 229852 203651 4838540 4729276</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Initial data statistics for ASBC after text pre-processing. |V | is the vocabulary size, q is the total number of sentences, |R(S)| is the total number of distinct sentences, N q is the total number of tokens in the corpus, and N R is the total number of tokens in the distinct sentences.", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>Name</td></tr><tr><td>A</td></tr><tr><td>D</td></tr><tr><td>DE</td></tr><tr><td>Dfa</td></tr><tr><td>Na</td></tr><tr><td>Nc</td></tr><tr><td>Neu</td></tr><tr><td>Nep</td></tr><tr><td>Nf</td></tr><tr><td>Nh</td></tr><tr><td>Nv</td></tr><tr><td>P</td></tr><tr><td>SHI</td></tr><tr><td>VH</td></tr><tr><td>VC</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Selected part-of-speech tags used in the ASBC corpus.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |