|
{ |
|
"paper_id": "O04-2005", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:00:54.724816Z" |
|
}, |
|
"title": "Reliable and Cost-Effective Pos-Tagging", |
|
"authors": [ |
|
{ |
|
"first": "Yu-Fang", |
|
"middle": [], |
|
"last": "Tsai", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Academia Sinica", |
|
"location": { |
|
"addrLine": "128 Academia Rd. Sec.2", |
|
"settlement": "Nankang", |
|
"region": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Keh-Jiann", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Academia Sinica", |
|
"location": { |
|
"addrLine": "128 Academia Rd. Sec.2", |
|
"settlement": "Nankang", |
|
"region": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "kchen@iis.sinica.edu.tw" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In order to achieve fast, high quality Part-of-speech (pos) tagging, algorithms should achieve high accuracy and require less manually proofreading. This study aimed to achieve these goals by defining a new criterion of tagging reliability, the estimated final accuracy of the tagging under a fixed amount of proofreading, to be used to judge how cost-effective a tagging algorithm is. In this paper, we also propose a new tagging algorithm, called the context-rule model, to achieve cost-effective tagging. The context rule model utilizes broad context information to improve tagging accuracy. In experiments, we compared the tagging accuracy and reliability of the context-rule model, Markov bi-gram model and word-dependent Markov bi-gram model. The result showed that the context-rule model outperformed both Markov models. Comparing the models based on tagging accuracy, the context-rule model reduced the number of errors 20% more than the other two Markov models did. For the best cost-effective tagging algorithm to achieve 99% tagging accuracy, it was estimated that, on average, 20% of the samples of ambiguous words needed to be rechecked. We also compared tradeoff between the amount of proofreading needed and final accuracy for the different algorithms. It turns out that an algorithm with the highest accuracy may not always be the most reliable algorithm.", |
|
"pdf_parse": { |
|
"paper_id": "O04-2005", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In order to achieve fast, high quality Part-of-speech (pos) tagging, algorithms should achieve high accuracy and require less manually proofreading. This study aimed to achieve these goals by defining a new criterion of tagging reliability, the estimated final accuracy of the tagging under a fixed amount of proofreading, to be used to judge how cost-effective a tagging algorithm is. In this paper, we also propose a new tagging algorithm, called the context-rule model, to achieve cost-effective tagging. The context rule model utilizes broad context information to improve tagging accuracy. In experiments, we compared the tagging accuracy and reliability of the context-rule model, Markov bi-gram model and word-dependent Markov bi-gram model. The result showed that the context-rule model outperformed both Markov models. Comparing the models based on tagging accuracy, the context-rule model reduced the number of errors 20% more than the other two Markov models did. For the best cost-effective tagging algorithm to achieve 99% tagging accuracy, it was estimated that, on average, 20% of the samples of ambiguous words needed to be rechecked. We also compared tradeoff between the amount of proofreading needed and final accuracy for the different algorithms. It turns out that an algorithm with the highest accuracy may not always be the most reliable algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Part-of-speech (pos) tagging for a large corpus is a labor intensive and time-consuming task. Most tagging algorithms try to achieve high accuracy, but 100% accuracy is an impossible goal. Even after tremendous amounts of time and labor are spent on the post-process of proofreading, many errors still exist in publicly available tagged corpora. Therefore, in order to achieve fast, high quality pos tagging, tagging algorithms should not only achieve high accuracy but also require less manually proofreading. In this paper, we propose a context-rule model to achieve both goals.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The first goal is to improve tagging accuracy. According to our observation, the pos tagging of a word depends on its context but not simply on its context category. Therefore, the proposed context-rule model utilizes a broad scope of context information to perform pos tagging of a word. Rich context information helps to improve the model coverage rate and tagging accuracy. The context-rule model will be described in more detail later in this paper. Our second goal is to reduce the manual editing effort. A new concept of reliable tagging is proposed. The idea is as follows. An evaluation score is assigned to each tagging decision as an indicator of tagging confidence. If a high confidence value is achieved, it indicates that the tagging decision is very likely correct. On the other hand, a low confidence value means that the tagging decision requires manual checking. If a tagging algorithm can achieve a high degree of reliability in evaluation, this means that most of the high confidence tagging results need not manually rechecked. As a result, the time and manual efforts required in the tagging process can be drastically reduced. The reliability of a tagging algorithm is defined as follows: Reliability = The estimated final accuracy achieved by the tagging model under the constraint that only a fixed number of target words with the lowest confidence values are manually proofread.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The notion of tagging reliability is slightly different from the notion of tagging accuracy since high accurate algorithm may require more manual proofreading than a reliable algorithm that achieves lower accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The rest of this paper is organized as follows. In section 2, the relation between reliability and accuracy is discussed. In section 3, three different tagging algorithms, the Markov pos bi-gram model, word-dependent Markov bi-gram model, and context-rule model, are discussed. In section 4, the three algorithms are compared based on tagging accuracy. In addition, confidence measures of tagging results are defined, and the most cost-effective algorithm is determined. Conclusions are drawn on section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The reported accuracy of automatic tagging algorithms ranges from about 95% to 96% [Chang et al., 1993; Lua, 1996; Liu et al., 1995] . If we can pinpoint errors, then only 4~5% of the target corpus has to be revised to achieve 100% accuracy. However, since the errors are not identified, conventionally, the whole corpus has to be re-examined. This is most tedious and time consuming since a practically useful tagged corpus is at least several million words in size. In order to reduce the amount manual editing required and speed up the process of constructing a large tagged corpus, only potential tagging errors should be rechecked manually [Kveton et al., 2002; Nakagawa et al., 2002] . The problem is how to find the potential errors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 103, |
|
"text": "[Chang et al., 1993;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 114, |
|
"text": "Lua, 1996;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 132, |
|
"text": "Liu et al., 1995]", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 666, |
|
"text": "[Kveton et al., 2002;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 667, |
|
"end": 689, |
|
"text": "Nakagawa et al., 2002]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reliability vs. Accuracy", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Suppose that a probabilistic-based tagging method assigns a probability to each pos of a target word by investigating the context of this target word w. The hypothesis is that if the probability of the top choice candidate is much higher than the probability of the second choice candidate , then the confidence value assigned to will also be higher. (Hereafter, for the purpose of simplification, we will use to stand for , if without confusing.) Likewise, if the probability is close to the probability , then the confidence value assigned to will also be lower. We aim to prove the above hypothesis by using empirical methods. For each different tagging method, we define its confidence measure according to the above hypothesis and examine whether tagging errors are likely to occur for words with low tagging confidence. If the hypothesis is true, we can proofread among the auto-tagged results only those words with low confidence values. Furthermore, the final accuracy of the tagging process after partial proofreading is done can also be estimated based on the accuracy of the tagging algorithm and the number of errors contained in the proofread data. For instance, suppose that a system has a tagging accuracy of 94%, and that K% of the target words with the lowest confidence scores covers 80% of the errors. After those K% of tagged words are proofread, 80% of the errors are fixed. Therefore, the reliability score of this tagging system of K% proofread words will be 1 -(error rate) * (reduced error rate) = 1 -((1 -accuracy rate) * 20%) = 1 -((1 -94%) * 20%) = 98.8%. On the other hand, suppose that another tagging system has a higher tagging accuracy of 96%, but that its confidence measure is not very high, such that K% of the words with the lowest confidence scores contains only 50% of the errors. Then the reliability of this system is 1 -((1 -96%) * 50%) = 98%, which is lower than that of the first system. That is to say, after expending the same amount of effort on manual proofreading, the first system achieves better results even though it has lower tagging accuracy. In other words, a reliable system is more cost-effective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reliability vs. Accuracy", |
|
"sec_num": "2." |
|
}, |
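{

"text": "As an illustration of the reliability computation above, the following minimal Python sketch (our addition, not part of the original paper) reproduces the two worked examples; the function name and the numbers are just the illustrative values from the text:\n\ndef reliability(accuracy, error_coverage):\n    # Errors in the proofread portion are fixed; the rest remain.\n    remaining_errors = 1.0 - error_coverage\n    return 1.0 - (1.0 - accuracy) * remaining_errors\n\n# System A: 94% accuracy; the K% lowest-confidence words cover 80% of errors.\nprint(reliability(0.94, 0.80))  # 0.988\n# System B: 96% accuracy, but the same K% covers only 50% of errors.\nprint(reliability(0.96, 0.50))  # 0.980 -- less reliable than system A",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reliability vs. Accuracy",

"sec_num": "2."

},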
|
|
{ |
|
"text": "In this paper, we will evaluate three different tagging algorithms based on the same training and testing data, compare them based on tgging accuracy, and determine the most reliable tagging algorithm among them. The three tagging algorithms are the Markov bi-gram model, word-dependent Markov model, and context-rule model. The training data and testing data were extracted from the Sinica corpus, a 5 million word balanced Chinese corpus with pos tagging [Chen et al., 1996] . The confidence measure was defined for each algorithm, and the final accuracy was estimated with the constraint that only a fixed amount of testing data needed to be proofread.", |
|
"cite_spans": [ |
|
{ |
|
"start": 457, |
|
"end": 476, |
|
"text": "[Chen et al., 1996]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Algorithms and Confidence Measures", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "its left/right context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table 1. Sample keyword-in-context file of the words '\u7814\u7a76' sorted according to", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u7684(DE) \u91cd\u8981(VH) \u7814\u7a76(Nv) \u6a5f\u69cb(Na) \u4e4b(DE) \u76f8\u7576(Dfa) \u91cd\u8996(VJ) \u7814\u7a76(Nv) \u958b\u767c(Nv) \uff0c(COMMACATEGORY) \u5167(Ncd) \u91cd\u9ede(Na) \u7814\u7a76(Nv) \u9700\u6c42(Na) \u3002(PERIODCATEGORY) \u4ecd(D) \u9650\u65bc(VJ) \u7814\u7a76(Nv) \u968e\u6bb5(Na) \u3002(PERIODCATEGORY) \u6c11\u65cf(Na) \u97f3\uf914(Na) \u7814\u7a76(VE) \u8005(Na) \u660e\uf9f7\u570b(Nb) \u8d74(VCL) \u9999\u6e2f(Nc) \u7814\u7a76(VE) \u8a72(Nes) \u5730(Na) \u4ea6(D) \u503c\u5f97(VH) \u7814\u7a76(VE) \u3002(PERIODCATEGORY) \u5408\u5b9c\u6027(Na) \u503c\u5f97(VH) \u7814\u7a76(VE) \u3002(PERIODCATEGORY) \uf901(D) \u503c\u5f97(VH) \u7814\u7a76(Nv) \u3002(PERIODCATEGORY)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table 1. Sample keyword-in-context file of the words '\u7814\u7a76' sorted according to", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It is easier to proofread and obtain consistent tagging results if proofreading is done by checking each ambiguous word in its keyword-in-context file. For instance, in Table 1 , the keyword-in-context file of the word '\u7814\u7a76' (research), which has pos of verb type VE and noun type Nv, is sorted according to its left/right context. Proofreaders can take the other examples as references to determine whether tagging results are correct. If all of the occurrences of ambiguous words had to be rechecked, this would require too much work. Therefore, only words with low confidence scores will be rechecked.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 176, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Table 1. Sample keyword-in-context file of the words '\u7814\u7a76' sorted according to", |
|
"sec_num": null |
|
}, |
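{

"text": "A small Python sketch (our illustration; the toy data structure is hypothetical, not the paper's format) of how such a keyword-in-context file can be generated and sorted by context:\n\ndef kwic(sentences, target, window=2):\n    # sentences: list of sentences, each a list of (word, pos) pairs.\n    rows = []\n    for sent in sentences:\n        for i, (w, _) in enumerate(sent):\n            if w == target:\n                left = sent[max(0, i - window):i]\n                right = sent[i + 1:i + 1 + window]\n                rows.append((left, sent[i], right))\n    # Sort by left then right context, so similar usages cluster together.\n    rows.sort(key=lambda r: (r[0], r[2]))\n    return rows",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Table 1. Sample keyword-in-context file of the words '\u7814\u7a76' sorted according to",

"sec_num": null

},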
|
{ |
|
"text": "A general confidence measure can be defined as ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table 1. Sample keyword-in-context file of the words '\u7814\u7a76' sorted according to", |
|
"sec_num": null |
|
}, |
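{

"text": "A minimal Python sketch (our addition) of the ratio-form confidence measure defined above, used to flag tagging decisions for manual proofreading; the threshold value is an arbitrary example, not one from the paper:\n\ndef confidence(p_best, p_second):\n    # Ranges from 0.5 (tie between top two candidates) to 1.0.\n    return p_best / (p_best + p_second)\n\ndef needs_proofreading(candidate_probs, threshold=0.8):\n    ranked = sorted(candidate_probs.values(), reverse=True)\n    if len(ranked) < 2:\n        return False  # unambiguous word\n    return confidence(ranked[0], ranked[1]) < threshold\n\nprint(needs_proofreading({'Nv': 0.55, 'VE': 0.45}))  # True: low confidence",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Table 1. Sample keyword-in-context file of the words '\u7814\u7a76' sorted according to",

"sec_num": null

},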
|
{ |
|
"text": "The most widely used tagging models are the part-of-speech n-gram models, in particular, the bi-gram and tri-gram models. A bi-gram model looks at pairs of categories (or words) and uses the conditional probability of . The Markov assumption is that the probability of a pos occurring depends only on the pos before it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markov Bi-gram Model", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "Given a word sequence , the Markov bi-gram model searches for the pos sequence such that argmax \u03a0 * is achieved. In our experiment, since we were only focusing on the resolution of ambiguous words, a twisted Markov bi-gram model was applied. For each ambiguous target word, its pos with the highest model probability was tagged. The probability of each candidate pos for a target word was estimated as * * . We call this model the general Markov bi-gram model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markov Bi-gram Model", |
|
"sec_num": "3.1" |
|
}, |
|
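{

"text": "A minimal Python sketch (our addition, assuming pre-computed probability tables stored as plain dicts) of the general Markov bi-gram scoring described above:\n\ndef score_general(c, prev_c, next_c, w, p_trans, p_emit):\n    # P(c | prev_c) * P(next_c | c) * P(w | c)\n    return (p_trans.get((prev_c, c), 0.0)\n            * p_trans.get((c, next_c), 0.0)\n            * p_emit.get((c, w), 0.0))\n\ndef tag_ambiguous(word, candidates, prev_c, next_c, p_trans, p_emit):\n    # Pick the candidate pos with the highest model probability.\n    return max(candidates,\n               key=lambda c: score_general(c, prev_c, next_c, word,\n                                           p_trans, p_emit))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Markov Bi-gram Model",

"sec_num": "3.1"

},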
|
{ |
|
"text": "The difference between the general Markov bi-gram model and the word-dependent Markov bi-gram model lies in the way in which the statistical data for and is estimated. There are two approaches to estimating the probability. One is to count all the occurrences in the training data, and the other is to count only the occurrences in which each occurs. In other words, the algorithm tags the pos for , such that maximizes the probability of * * instead of maximizing the probability of * * . We call this model the word-dependent Markov bi-gram model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Dependent Markov Bi-gram Model", |
|
"sec_num": "3.2" |
|
}, |
|
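{

"text": "The word-dependent variant only changes how the transition tables are built; here is a sketch (ours, under the same dict-based assumptions as above) of the two counting schemes:\n\nfrom collections import Counter\n\ndef transition_counts(tagged_corpus, word=None):\n    # tagged_corpus: list of sentences of (word, pos) pairs.\n    # word=None: count all pos bi-grams (general model).\n    # word=w:    count only bi-grams adjacent to occurrences of w\n    #            (word-dependent model).\n    counts = Counter()\n    for sent in tagged_corpus:\n        for (w1, c1), (w2, c2) in zip(sent, sent[1:]):\n            if word is None or word in (w1, w2):\n                counts[(c1, c2)] += 1\n    return counts",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Word-Dependent Markov Bi-gram Model",

"sec_num": "3.2"

},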
|
{ |
|
"text": "The dependency features utilized to determine the best pos-tag in Markov models are the categories of context words. In fact, in some cases, the best pos-tags might be determined by using other context features, such as context words [Brill, 1992] . In the context-rule model, broad context information is utilized to determine the best pos-tag. We extend the scope of the dependency context of a target word to its 2 by 2 context windows. Therefore, the context features of a word can be represented by the vector of . Each feature vector may be associated with a unique pos-tag or many ambiguous pos-tags. The association probability of a possible pos", |
|
"cite_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 247, |
|
"text": "[Brill, 1992]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-Rule Model", |
|
"sec_num": "3.3" |
|
}, |
|
|
{ |
|
"text": "c\u2032 is P( 0 c\u2032 | , feature vector). If for some ( , ), the value of P( | , feature vector) is not 1, then this means that the of cannot be uniquely determined by its context vector. Some additional features have to be incorporated to resolve the ambiguity. If the full scope of the context feature vector is used, data sparseness problem will seriously degrade the system performance. Therefore, partial feature vectors are used instead of full feature vectors. The partial feature vectors applied in our context-rule model are , , ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-Rule Model", |
|
"sec_num": "3.3" |
|
}, |
|
|
{ |
|
"text": "In the training stage, for each feature vector type, many rule instances are generated, and their probabilities associated with the pos of the target word are calculated. For instance, with the feature vector types , , , ,\u2026, we can extract the rule patterns of (\u5148\u751f), (\u4e4b\u9918), (Nb, Na), (Ng, COMMA), ...etc. associated with the pos VE of the target word from the following sentence while the target word is '\u7814\u7a76 research':", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-Rule Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "1 \u2212 w 1 w 1 2 \u2212 \u2212 c c 2 1 c c 1 \u2212 w 1 w 1 2 \u2212 \u2212 c c 2 1 c c \u5468 Tsou (Nb) \u5148\u751f Mr (Na) \u7814\u7a76 research (VE) \u4e4b\u9918 after (Ng) \uff0c(COMMA)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-Rule Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\"After Mr. Tsou has done his research,\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-Rule Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Through the investigation of all training data, various different rule patterns (associated with a candidate pos of a target word) are generated and their association probabilities of P( | , feature vector) derived. For instance, if we take those word sequences listed in 0 as training data and take as a feature pattern, and if we let '\u7814\u7a76' be the target word, then the rule pattern (VH, PERIOD) will be extracted, and we will derive the probabilities P(VE | '\u7814\u7a76', (VH, PERIOD)) = 2/3 and P(NV | '\u7814\u7a76', (VH, PERIOD)) = 1/3. The rule patterns and their association probability are used to determine the probability of each candidate pos of a target word in a testing sentence. Suppose that the target word has ambiguous categories , and context patterns pattern", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-Rule Model", |
|
"sec_num": "3.3" |
|
}, |
|
|
{ |
|
"text": "then, the probability of assigning tag to the target word is defined as follows: In other words, the probabilities of different patterns with the same candidate pos are accumulated and normalized by means of the total probability distributed to all the candidates as the probability of the candidate pos. The algorithm tags the pos of the highest probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context-Rule Model", |
|
"sec_num": "3.3" |
|
}, |
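{

"text": "A compact Python sketch (our addition; the dict layouts and names are assumptions, not the paper's implementation) of the context-rule scoring just described:\n\nfrom collections import defaultdict\n\ndef context_rule_probs(word, patterns, rule_table):\n    # rule_table: {(word, pattern): {pos: P(pos | word, pattern)}}\n    # Accumulate pattern probabilities per candidate pos, then normalize.\n    scores = defaultdict(float)\n    for pat in patterns:\n        for pos, p in rule_table.get((word, pat), {}).items():\n            scores[pos] += p\n    total = sum(scores.values())\n    return {pos: s / total for pos, s in scores.items()} if total else {}\n\n# Toy example with the (VH, PERIOD) pattern from the text:\ntable = {('\u7814\u7a76', ('VH', 'PERIOD')): {'VE': 2 / 3, 'Nv': 1 / 3}}\nprint(context_rule_probs('\u7814\u7a76', [('VH', 'PERIOD')], table))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Context-Rule Model",

"sec_num": "3.3"

},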
|
{ |
|
"text": "For our experiments, the Sinica corpus was divided into two parts. The training data contained 90% of the corpus, while the testing data contained the remaining 10%. Only the target words with ambiguous pos were evaluated. We evaluated only on the ambiguous words with frequencies higher than or equal to 10 for sufficiency of the training data and testing data. Furthermore, the total token count of the words with frequencies less than 10 occupied only 0.4335% of all the ambiguous word tokens. Since those words had much less effect on the overall performance, we just ignored them to simplify the designs of the evaluated tagging systems in the experiments. Another important reason was that for those words with low frequencies, all their tagging results had to be rechecked anyway, since our experiments showed that low tagging accuracies were inevitable due to the lack of training data. We also examined the effects on the tagging accuracy and reliability on the words with variations on pos ambiguities and the amount of training data. Six ambiguous words with different frequencies, listed in Table 2 , were selected as our target words for detail examinations. The frequencies of some words were too low to provide enough training data, such as the words '\u63a1\u8a2a interview' and '\u6f14\u51fa perform' listed in 0. To solve the problem of data sparseness, the Jeffreys-Perks law, or Expected Likehood Estimation (ELE) [Manning et al., 1999] , was used as a smoothing method for all the tagging algorithms evaluated in the experiments. The probability was defined as ) ,...,", |
|
"cite_spans": [ |
|
{ |
|
"start": 1414, |
|
"end": 1436, |
|
"text": "[Manning et al., 1999]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1103, |
|
"end": 1110, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "4." |
|
}, |
|
|
{ |
|
"text": ", where denotes the number of all B pattern types in the training data and \u03bb denotes the default occurrence count for an unseen event. That is to say, we took a value \u03bb for an unseen event as its occurrence count. If the value of \u03bb was 0, this means that there was no smoothing process for the unseen event. The most widely used value for \u03bb is 0.5, which was also applied in our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "4." |
|
}, |
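{

"text": "A short Python sketch (ours) of ELE smoothing as defined above:\n\ndef ele_probability(count, n_tokens, n_pattern_types, lam=0.5):\n    # P = (C + lambda) / (N + B * lambda); lambda = 0.5 is the\n    # Jeffreys-Perks value used in the experiments.\n    return (count + lam) / (n_tokens + n_pattern_types * lam)\n\n# An unseen pattern (count 0) still receives a small probability:\nprint(ele_probability(0, 10000, 500))   # 0.5 / 10250\nprint(ele_probability(12, 10000, 500))  # 12.5 / 10250",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments and Results",

"sec_num": "4."

},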
|
{ |
|
"text": "In the experiments, we compared the tagging accuracy of the three tagging algorithms as described in section 3. The experiment results are shown in Table 3 . It is obvious that the word-dependent Markov bi-gram model outperformed the general Markov bi-gram model. It reduced almost 30% the number of errors compared to the general Markov bi-gram model. As expected, the context-rule model performed the best for each selected word and the overall tagging accuracy. The tagging accuracy results for selected words show inconsistency. This is exemplified by the lower accuracy for the word '\u7814\u7a76 research'. It is believed that the flexible usage of '\u7814\u7a76 research' degraded the performances of the tagging algorithms. The lack of training data also hurt the performance of the tagging algorithms. The words with fewer training data, such as '\u63a1\u8a2a interview' and '\u6f14\u51fa perform', were also associated with poor tagging accuracy. Therefore, words with low frequencies should be handled using some general tagging algorithms to improve the overall performance of a tagging system. Furthermore, in future, word-dependent reliability criteria need to be studied.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 155, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagging Accuracy", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the experiments on reliability, the confidence measure of the ratio of the probability gap between the top choice candidate and the second choice candidate was adopted for all three models. The tagging results with confidence scores lower than a pre-defined threshold were re-checked. Some tagging results were assigned the default pos (in general, the one with the highest frequency of the word) since there were no training patterns applicable to the tagging process. Those tagging results that were not covered by the training patterns also needed to be re-checked. With the increased pre-defined threshold, the amount of partial corpus that needed to be re-checked could be estimated automatically since the Sinica corpus provides the correct pos-tag for each target word. Furthermore, the final accuracy could be estimated if the corresponding amount of partial corpus was proofread. Figure 1 shows the results for the tradeoff between the amount of proofreading and the estimated final accuracy for the three algorithms. The x-coordinate indicates the portion of the partial corpus that needed to be manually proofread under a pre-defined threshold. The y-coordinate indicates the final accuracy after the corresponding portion of the corpus was proofread. Without any manual proofreading, the accuracy of the context-rule algorithm was about 1.4% higher than that of the word-dependent Markov bi-gram model. As the percentage of manual proofreading increased, the accuracy of each algorithm also increased. It is obvious that the accuracy of the context-rule model increased more slowly than did that of the two Markov models, as the amount of manual proofreading increased.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 892, |
|
"end": 900, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagging Reliability", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The final accuracy results of the context-rule model and the two Markov models coincided at approximately 98.5% and 99.4%, with around 13% and 35% manual proofreading. After that, both Markov models achieved higher final accuracy than the context-rule model did when the amount of manual proofreading increased more. The results indicate that if the required tagging accuracy is over 98.5%, then the two Markov models will be better choices since in our experiments, they achieved higher final accuracy than the context-rule model did. It can also be concluded that an algorithm with higher accuracy may not always be an accurate algorithm. Figure 2 and Figure 3 show the error coverage of the six ambiguous target words after different portions of corpus are proofread respectively. It shows that not only tagging accuracy but also reliability were degraded due to the lack of sufficient training data. Tagging algorithms achieve better error coverage for target words with more training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 641, |
|
"end": 649, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 662, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagging Reliability", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "There is a tradeoff between amount of manual proofreading and the final accuracy. If the goal of tagging is to achieve 99% accuracy, then an estimated threshold value of the confidence score needed to achieve the target accuracy rate will be given, and a tagged word with a confidence score less than this designated threshold value will be checked. On the other hand, if the requirement is to finish the tagging process in a a limited amount of time and with limited amount of manual labor, then in order to achieve the desired final accuracy, we will first need to estimate the portion of the corpus which will have to be proofread, and then determine the threshold value of the confidence score. Figure 4 shows the error coverage of each different portions of corpus with the lowest confidence score. By proofreading the initial 10% low confidence tagging data we achieve the most improvement in accuracy. As the amount of proofread corpus increased, the level of accuracy decreased rapidly. The experimental results of tagging reliability can help us decide which is the most cost-effective tagging algorithm and how to proofread tagging results under constraints on the available human resources and time. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 699, |
|
"end": 707, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Tradeoff between the Amount of Manual Proofreading and the Final accuracy", |
|
"sec_num": "4.3" |
|
}, |
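{

"text": "A sketch (ours; the data format is assumed) of how a confidence threshold can be chosen from held-out data to meet a target final accuracy, in the spirit of the procedure described above:\n\ndef choose_threshold(results, target_accuracy):\n    # results: list of (confidence, is_correct) pairs for auto-tagged words.\n    # Proofreading is assumed to fix every error it covers.\n    results = sorted(results)  # lowest confidence first\n    n = len(results)\n    for k in range(n + 1):\n        accuracy = (k + sum(ok for _, ok in results[k:])) / n\n        if accuracy >= target_accuracy:\n            threshold = results[k - 1][0] if k else 0.0\n            return threshold, k / n  # threshold, portion to proofread\n    return 1.0, 1.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Tradeoff between the Amount of Manual Proofreading and the Final accuracy",

"sec_num": "4.3"

},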
|
{ |
|
"text": "In this paper, we have proposed a context-rule model for pos tagging. We have also proposed a new way of finding the most cost-effective tagging algorithm. Cost-effectiveness is defined based on a criterion of reliability. The reliability of the system is measured in terms of the confidence score for ambiguity resolution of each tagging. The basic observation of confidence tagging is as follows: the larger the gap between the candidate pos with the highest probability and other (the second, for example) candidate pos with lower probability, the more confidence can be placed in the tagging result. It is believed that the ability to resolve pos ambiguity plays a more important part than the confidence measurement in the tagging system, since a larger gap between the first candidate pos and the second candidate pos can result in a high confidence score. Therefore, another reasonable measurement of the confidence score will work as well as the one used in our experiments if the tagging algorithms have good ability to resolve pos ambiguity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "For the best cost-effective tagging algorithm, on average, 20% of the samples of ambiguous words need to be rechecked to achieve 99% accuracy. In other words, the manual labor of proofreading is reduced by more than 80%. Our study on tagging reliability, in fact, provides a way to determine the optimal tagging strategy under different constraints. The constraints might be to achieve the best tagging accuracy under time and labor constraints or to achieve a certain accuracy with the least effort possible expended on proofreading. For instance, if the goal of tagging is to achieve 99% accuracy, then a threshold value of the confidence score needed to achieve the target accuracy will be estimated, and a tagged word with a confidence score less than this designated threshold value will be checked. On the other hand, if the constraint is to finish the tagging process under time and manual labor constraints, then in order to achieve the desired final accuracy, we will first estimate the portion of the corpus that will have to be proofread, and then determine the threshold value of the confidence score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "In future, we will extend the coverage of confidence checking for all words, including words with single pos, to detect flexible word usages. The confidence measure for words with single pos can be obtained by comparing the tagging probability of the pos of the words with the probabilities of the other categories. Furthermore, since tagging accuracy and reliability are degrading due to the intrinsic complexity of word usage and the less amount of training data, we will study word-dependent reliability to overcome the degrading problems. There are many possible confidence measures. For instance is a reasonable alternative. We will study different alternatives in the future to obtain a more reliable confidence measure. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The log-likelihood ratio of log(P(c 1 )/P(c 2 )) is an alternative confidence measure. However, some tagging algorithms, such as context-rule model, may not necessary produce a real probability estimation for each pos. Scaling control for the log-likelihood ratio will be hard for those algorithms to achieve. In addition, the range of our confidence score is 0.5 ~ 1.0 and it is thus easier to evaluate different tagging algorithms. Therefore, the above confidence value is adopted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work was partially supported under NSC grant 92-2213-E-001-016.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement:", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "HMM-based Part-of-Speech Tagging for Chinese Corpora", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H D" |
|
], |
|
"last": "Chang & C", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the Workshop on Very Large Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. H. Chang & C. D. Chen, 1993, \"HMM-based Part-of-Speech Tagging for Chinese Corpora,\" in Proceedings of the Workshop on Very Large Corpora, Columbus, Ohio, pp. 40-47.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Category Guessing for Chinese Unknown Words", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "&", |
|
"middle": [ |
|
"K J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of NLPRS97", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "35--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. J. Chen, M. H. Bai, & K. J. Chen, 1997, \"Category Guessing for Chinese Unknown Words,\" in Proceedings of NLPRS97, Phuket, Thailand, pp. 35-40.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Foundations of Statistical Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning & Hinrich Schutze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "202--204", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning & Hinrich Schutze, Foundations of Statistical Natural Language Processing, The MIT Press, 1999, pp. 43-45, pp. 202-204.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Simple Rule-Based Part-of-Speech Taggers", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of ANLP-92, 3rd Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Brill, \"A Simple Rule-Based Part-of-Speech Taggers,\" in Proceedings of ANLP-92, 3rd Conference on Applied Natural Language Processing 1992, pp. 152-155.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Sinica Corpus: Design Methodology for Balanced Corpora", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hsu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of PACLIC II", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "167--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. J. Chen, C. R. Huang, L. P. Chang, & H. L. Hsu, 1996, \"Sinica Corpus: Design Methodology for Balanced Corpora,\" in Proceedings of PACLIC II, Seoul, Korea, pp. 167-176.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Part of Speech Tagging of Chinese Sentences Using Genetic Algorithm", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Lua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of ICCC96", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--49", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. T. Lua, 1996, \"Part of Speech Tagging of Chinese Sentences Using Genetic Algorithm,\" in Proceedings of ICCC96, National University of Singapore, pp. 45-49.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semi-) Automatic Detection of Errors in Pos-Tagged Corpora", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Kveton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "&", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Oliva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of Coling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "509--515", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Kveton & K. Oliva, 2002, \"(Semi-) Automatic Detection of Errors in Pos-Tagged Corpora,\" in Proceedings of Coling 2002, Taipei, Taiwan, pp. 509-515.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatic Part-of-Speech Tagging for Chinese Corpora", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Computer Proceeding of Oriental Languages", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "31--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. H. Liu, K. J. Chen, L. P. Chang, & Y. H. Chin, 1995, \"Automatic Part-of-Speech Tagging for Chinese Corpora,\" on Computer Proceeding of Oriental Languages, Hawaii, Vol. 9, pp.31-48.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Detecting Errors in Corpora Using Support Vector Machines", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Nakagawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "& Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of Coling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "709--715", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Nakagawa & Y. Matsumoto, 2002, \"Detecting Errors in Corpora Using Support Vector Machines,\" in Proceedings of Coling 2002, Taipei, Taiwan, pp.709-715.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The common terms used in the following tagging algorithms discussed below are defined as follows:", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "n", |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Tradeoff between the amount of manual proofreading and the final accuracy.", |
|
"uris": null |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Figure 2. Error coverage of word-dependent Markov model after amount of corpus is proofread.", |
|
"uris": null |
|
}, |
|
"FIGREF7": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Error coverage of context-rule model after amount of corpus is proofread.", |
|
"uris": null |
|
}, |
|
"FIGREF9": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Error coverage rate of different portion of corpus to be proofread.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Word</td><td>Frequency</td><td/><td colspan=\"3\">Ambiguity (Pos-Count)</td><td/></tr><tr><td>\u4e86</td><td>47607</td><td>Di-36063</td><td>T-11504</td><td>VJ-25</td><td>VC-11</td><td/></tr><tr><td>\u5c07</td><td>13188</td><td>D-7599</td><td>P-5547</td><td>Na-27</td><td>Di-8</td><td>VC-5</td></tr><tr><td>\u7814\u7a76</td><td>4734</td><td>Nv-3695</td><td>VE-1032</td><td>VC-6</td><td>VA-1</td><td/></tr><tr><td>\u6539\u8b8a</td><td>1298</td><td>VC-953</td><td>Na-345</td><td/><td/><td/></tr><tr><td>\u6f14\u51fa</td><td>723</td><td>VC-392</td><td>Na-331</td><td/><td/><td/></tr><tr><td>\u63a1\u8a2a</td><td>121</td><td>VC-70</td><td>Nv-45</td><td>Na-6</td><td/><td/></tr><tr><td/><td>Word</td><td colspan=\"2\">General Markov</td><td colspan=\"2\">Word-Depend. Markov</td><td>Context-Rule</td></tr><tr><td/><td>\u4e86</td><td colspan=\"2\">96.95 %</td><td>97.92 %</td><td/><td>98.87 %</td></tr><tr><td/><td>\u5c07</td><td colspan=\"2\">93.47 %</td><td>93.17 %</td><td/><td>95.52 %</td></tr><tr><td/><td>\u7814\u7a76</td><td colspan=\"2\">80.76 %</td><td>79.28 %</td><td/><td>81.40 %</td></tr><tr><td/><td>\u6539\u8b8a</td><td colspan=\"2\">87.60 %</td><td>89.92 %</td><td/><td>93.02 %</td></tr><tr><td/><td>\u63a1\u8a2a</td><td colspan=\"2\">68.06 %</td><td>63.89 %</td><td/><td>77.78 %</td></tr><tr><td/><td>\u6f14\u51fa</td><td colspan=\"2\">41.67 %</td><td>66.67 %</td><td/><td>66.67 %</td></tr><tr><td colspan=\"2\">Average of 6 words</td><td colspan=\"2\">94.56 %</td><td>95.12 %</td><td/><td>96.60 %</td></tr><tr><td colspan=\"2\">Average of all</td><td colspan=\"2\">91.07 %</td><td>94.07 %</td><td/><td>95.08 %</td></tr><tr><td colspan=\"2\">ambiguous words</td><td/><td/><td/><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |