{ "paper_id": "O03-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:01:04.959210Z" }, "title": "Reliable and Cost-Effective PoS-Tagging", "authors": [ { "first": "Yu-Fang", "middle": [], "last": "Tsai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica Nankang", "location": { "postCode": "115", "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica Nankang", "location": { "postCode": "115", "settlement": "Taipei", "country": "Taiwan" } }, "email": "kchen@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order to achieve fast and high quality Part-of-speech (PoS) tagging, algorithms should be high accuracy and require less manually proofreading. To evaluate a tagging system, we proposed a new criterion of reliability, which is a kind of cost-effective criterion, instead of the conventional criterion of accuracy. The most cost-effective tagging algorithm is judged according to amount of manual editing and achieved final accuracy. The reliability of a tagging algorithm is defined to be the estimated best accuracy of the tagging under a fixed amount of proofreading. We compared the tagging accuracies and reliabilities among different tagging algorithms, such as Markov bi-gram model, Bayesian classifier, and context-rule classifier. According to our experiments, for the best cost-effective tagging algorithm, in average, 20% of samples of ambivalence words need to be rechecked to achieve an estimated final accuracy of 99%. The tradeoffs between amount of proofreading and final accuracy for different algorithms are also compared. It concludes that an algorithm with highest accuracy may not always be the most reliable algorithm.", "pdf_parse": { "paper_id": "O03-1010", "_pdf_hash": "", "abstract": [ { "text": "In order to achieve fast and high quality Part-of-speech (PoS) tagging, algorithms should be high accuracy and require less manually proofreading. To evaluate a tagging system, we proposed a new criterion of reliability, which is a kind of cost-effective criterion, instead of the conventional criterion of accuracy. The most cost-effective tagging algorithm is judged according to amount of manual editing and achieved final accuracy. The reliability of a tagging algorithm is defined to be the estimated best accuracy of the tagging under a fixed amount of proofreading. We compared the tagging accuracies and reliabilities among different tagging algorithms, such as Markov bi-gram model, Bayesian classifier, and context-rule classifier. According to our experiments, for the best cost-effective tagging algorithm, in average, 20% of samples of ambivalence words need to be rechecked to achieve an estimated final accuracy of 99%. The tradeoffs between amount of proofreading and final accuracy for different algorithms are also compared. It concludes that an algorithm with highest accuracy may not always be the most reliable algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Part-of-speech tagging for a large corpus is a labor intensive and time-consuming task. Most of time and labors were spent on proofreading and never achieved 100% accuracy, as exemplified by many public available corpora. 
Since manual proofreading is inevitable, how do we derive the most cost-effective tagging algorithm? To reduce the effort of manual editing, a new concept of reliable tagging is proposed. The idea is as follows. An evaluation score, serving as an indicator of tagging confidence, is produced for each tagging decision. If a high confidence value is achieved, it indicates that this tagging decision is very likely correct. On the other hand, a low confidence value means the tagging result might require manual checking. If a tagging algorithm can provide a very reliable confidence evaluation, most of the high-confidence tagging results need not be checked manually. As a result, the time and manual effort required for tagging can be reduced drastically. The reliability of a tagging algorithm is defined as follows. Reliability = the estimated final accuracy achieved by the tagging model under the constraint that only a fixed amount of target words with the lowest confidence values is manually proofread.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Reliability is slightly different from the notion of tagging accuracy. It is possible that a higher-accuracy algorithm requires more manual proofreading than a more reliable algorithm with lower accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The tagging accuracies were compared among different tagging algorithms, namely the Markov PoS bi-gram model, the Bayesian classifier, and the context-rule classifier. In addition, confidence measures of the tagging will be defined. In this paper, the above three algorithms are designed and the most cost-effective algorithm is also determined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The reported accuracies of automatic tagging algorithms are about 95% to 96% (Chang et al., 1993; Lua, 1996; Liu et al., 1995 ). If we could pinpoint the errors, only 4~5% of the target corpus would have to be revised to achieve 100% accuracy. However, since the locations of errors are unknown, conventionally the whole corpus has to be reexamined. This is tedious and time-consuming, since a practically useful tagged corpus contains at least several million words. In order to reduce the manual editing and speed up the construction of a large tagged corpus, only potential tagging errors should be rechecked manually (Kveton et al., 2002; Nakagawa et al., 2002) . The problem is how to find the potential errors. Suppose that a probabilistic tagging method assigns a probability to each PoS of a target word w by investigating the context of w. The hypothesis is that if the probability of the top choice candidate P(c1 | w, context) is much higher than the probability of the second choice candidate P(c2 | w, context), then the confidence value assigned to c1 is also higher. (Hereafter, for simplicity and where no confusion arises, we will use P(c1) and P(c2) to stand for P(c1 | w, context) and P(c2 | w, context).) Likewise, if the probability P(c1) is close to the probability P(c2), then the confidence value assigned to c1 is also lower. 
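As a minimal illustration of this hypothesis (a sketch of ours, not the implementation used in the paper; the function names and the threshold value are hypothetical), the confidence of one tagging decision can be computed from the two best candidate probabilities and compared against a proofreading threshold:

def confidence_score(cand_probs):
    # cand_probs: candidate PoS -> model probability for one target word (hypothetical input).
    ranked = sorted(cand_probs.values(), reverse=True)
    if len(ranked) < 2:
        return 1.0                  # a word with a single PoS needs no rechecking
    p1, p2 = ranked[0], ranked[1]
    return p1 / (p1 + p2)           # 0.5 when the top two candidates tie, 1.0 when the winner is clear

def needs_proofreading(cand_probs, threshold=0.8):
    # Flag low-confidence decisions for manual checking; the threshold is illustrative only.
    return confidence_score(cand_probs) < threshold

The score P(c1) / (P(c1) + P(c2)) used here is the general confidence measure defined in Section 3.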
We try to prove the above hy-", "cite_spans": [ { "start": 73, "end": 97, "text": "96% (Chang et al., 1993;", "ref_id": null }, { "start": 98, "end": 108, "text": "Lua, 1996;", "ref_id": "BIBREF5" }, { "start": 109, "end": 125, "text": "Liu et al., 1995", "ref_id": "BIBREF7" }, { "start": 635, "end": 656, "text": "(Kveton et al., 2002;", "ref_id": "BIBREF6" }, { "start": 657, "end": 679, "text": "Nakagawa et al., 2002)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Reliability vs. Accuracy", "sec_num": "2" }, { "text": ") , | ( 1 context w c P ) context ) c 1 c , | ( 2 w c P 1 , | ( context w c P 2 c c ) (c P ( P ) ( 1 c P ) c 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability vs. Accuracy", "sec_num": "2" }, { "text": "pothesis by empirical methods. For each different tagging method, we define its confidence measure according to the above hypothesis and to see whether or not tagging errors are generally occurred at the words with low tagging confidence. If the hypothesis is true, we can proofread the auto-tagged results only on words with low confidence values. Furthermore, the final accuracy of the tagging after partial proofreading can also be estimated by the accuracy of the tagging algorithm and the amount of errors contained in the proofread data. For instance, a system has a tagging accuracy of 94% and supposes that K% of the target words with the lowest confidence scores covers 80% of errors. After proofreading those K% of words in the tagged words, those 80% errors are fixed. Therefore the reliability score of this tagging system of K% proofread will be 1 -(error rate) * (reduced error rate) = 1 -((1 -accuracy rate) * 20%) = 1 -((1 -94%) * 20%) = 0.988. On the other hand, another tagging system has a higher tagging accuracy of 96%, but its confidence measure is not very reliable, such that the K% of the words with the lowest confidence scores contains only 50% of errors. Then the reliability of this system is 1 -((1 -96%) * 50%) = 0.980, which is lower than the first system. That is to say after spending the same amount of effort of manual proofreading, the first system achieves a better results even it has lower tagging accuracy. In other word, a reliable system is more cost-effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability vs. Accuracy", "sec_num": "2" }, { "text": "In this study, we are going to test three different tagging algorithms based on same training data and testing data, and to find out the most reliable tagging algorithm. The three tagging algorithms are (Chen et al., 1996) . 
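As a concrete recap of the cost-effectiveness arithmetic in Section 2, the following sketch (ours, with the illustrative numbers taken from that worked example) computes the reliability obtained after proofreading only the least confident words:

def reliability(accuracy, error_coverage):
    # error_coverage: fraction of all tagging errors that falls inside the proofread portion.
    return 1.0 - (1.0 - accuracy) * (1.0 - error_coverage)

print(reliability(0.94, 0.80))   # about 0.988: lower accuracy but a reliable confidence measure
print(reliability(0.96, 0.50))   # about 0.980: higher accuracy but a weaker confidence measure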
The confidence measure will be defined for each algorithm and the best accuracy will be estimated at the constraint of only a fixed amount of testing data being proofread.", "cite_spans": [ { "start": 203, "end": 222, "text": "(Chen et al., 1996)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Tagging Algorithms and Confidence Measures", "sec_num": "3" }, { "text": "\u7684(DE) \u91cd\u8981(VH) \u7814\u7a76(Nv) \u6a5f\u69cb(Na) \u4e4b(DE) \u76f8\u7576(Dfa) \u91cd\u8996(VJ) \u7814\u7a76(Nv) \u958b\u767c(Nv) \uff0c(COMMACATEGORY) \u5167(Ncd) \u91cd\u9ede(Na) \u7814\u7a76(Nv) \u9700\u6c42(Na) \u3002(PERIODCATEGORY) \u4ecd(D) \u9650\u65bc(VJ) \u7814\u7a76(Nv) \u968e\u6bb5(Na) \u3002(PERIODCATEGORY) \u6c11\u65cf(Na) \u97f3\u6a02(Na) \u7814\u7a76(VE) \u8005(Na) \u660e\u7acb\u570b(Nb) \u8d74(VCL) \u9999\u6e2f(Nc) \u7814\u7a76(VE) \u8a72(Nes) \u5730(Na) \u4ea6(D) \u503c\u5f97(VH) \u7814\u7a76(VE) \u3002(PERIODCATEGORY) \u5408\u5b9c\u6027(Na) \u503c\u5f97(VH) \u7814\u7a76(VE) \u3002(PERIODCATEGORY) \u66f4(D) \u503c\u5f97(VH) \u7814\u7a76(Nv) \u3002(PERIODCATEGORY)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Algorithms and Confidence Measures", "sec_num": "3" }, { "text": "It is easier to proofread and make more consistent tagging results, if proofreading processes were done by checking the keyword-in-context file for each ambivalence word and only the tagging results of ambivalence word need to be proofread. The words with single PoS need not be rechecked their PoS tagging. For instance, in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Algorithms and Confidence Measures", "sec_num": "3" }, { "text": "w w ,... 1 | ( k c w P n c c ,... 1 ) k ) | ( 1 \u2212 k k c c P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Algorithms and Confidence Measures", "sec_num": "3" }, { "text": "1 Log-likelihood ratio of log P(c1)/P(c2) is another alternation of confidence measure. However, for some tagging algorithms, they may not necessary produce real probability estimation for each PoS, such as context-rule model. The scaling control for log-likelihood ratio will be hard for those algorithms. In addition, the range of our confidence score is between 0.5~1.0. Therefore, the above confidence value is adopted. only focusing on the resolution of ambivalence words only, a twisted Markov bi-gram model was applied. For each ambivalence target word, its PoS with the highest model probability is tagged. The probability of each candidate PoS for a target word is estimated by * * . There are two approaches to estimate the statistical data for and . One is to count all the occurrences in the training data, and another one is to count only the occurrences in which each occurs. According to the experiments, to estimate the statistic data using dependent data is better than using all sequences. In other words, the algorithm tags the PoS for , such that maximizes the probability of * * instead of maximizing the probability of * *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Algorithms and Confidence Measures", "sec_num": "3" }, { "text": ". 
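The exact product of factors in the passage above was lost in extraction; a plausible reading, which the sketch below assumes (it is our reconstruction, not necessarily the authors' exact model), is that each candidate category ck of an ambivalent word wk is scored with word-conditioned transition probabilities P(ck | ck-1, wk) and P(ck+1 | ck, wk), with the neighbouring categories ck-1 and ck+1 taken as already known:

def score_candidate(word, c_prev, c_cand, c_next, p_trans_given_word):
    # p_trans_given_word[(word, a, b)] approximates P(b | a, word), estimated from training data;
    # the small floor stands in for the ELE smoothing described in Section 4.
    left = p_trans_given_word.get((word, c_prev, c_cand), 1e-6)
    right = p_trans_given_word.get((word, c_cand, c_next), 1e-6)
    return left * right

def tag_ambivalent_word(word, candidates, c_prev, c_next, p_trans_given_word):
    # Pick the candidate PoS with the highest score; the runner-up feeds the confidence measure.
    scores = {c: score_candidate(word, c_prev, c, c_next, p_trans_given_word) for c in candidates}
    best = max(scores, key=scores.get)
    return best, scores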
k c k w | k c k w | 1 k \u2212 1 k c + ) | ( 1 \u2212 k k c c P | ( 1 \u2212 k k c c P ) , | 1 \u2212 k k k c w ) | ( 1 k k c c P + ) | 1 \u2212 k k c c k c k w ) ) | ( 1 k k c c P + | ( 1 k c P + , | ( 1 k k w c P + ) | ( k k c w P , | ( 1 k k w c P \u2212 k c ) | ( k k c w P ) k c k w k c ) k | ( k w P ) k ( 1 k c P + ) k w ) k c , | k w k c ) k w (c P (c P ) 1 \u2212 ( P | k w c c | ( k k c c P ) k ( k c P (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Algorithms and Confidence Measures", "sec_num": "3" }, { "text": "The Bayesian classifier algorithm adopts the Bayes theorem (Manning et al., 1999) ", "cite_spans": [ { "start": 59, "end": 81, "text": "(Manning et al., 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Bayesian Classifier", "sec_num": "3.2" }, { "text": ", | ( w P k ) k c ) , | c w 1 k k k \u2212 k c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Classifier", "sec_num": "3.2" }, { "text": "Dependency features utilized in determining the best PoS-tag in both Markov and Bayesian models are categories of context words. As a matter of fact, for some cases the best PoS-tags might be de-termined by other context features, such as context words (Brill, 1992) . In the context-rule model, broader scope of context information is utilized in determining the best PoS-tag. We extend the scope of the dependency context of a target word into its 2 by 2 context windows. Therefore the context features of a word can be represented by the vector of . ", "cite_spans": [ { "start": 253, "end": 266, "text": "(Brill, 1992)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Context-Rule Model", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "0 w 0 w 1 \u2212 2 1 c c 1 1 c c \u2212 1 2 \u2212 \u2212 c w 1 1 \u2212 \u2212 c w 1 \u2212 w 1 w 2 \u2212 c c 2 1 1 2 \u2212 \u2212 2 1 w c", "eq_num": ", , , , and ." } ], "section": "Context-Rule Model", "sec_num": "3.3" }, { "text": "At the training stage, for each feature vector type, many rule instances will be generated and their probabilities associated with PoS of the target word are also calculated. For instance, with the fea- By investigating all training data, various different rule patterns (associated with a candidate PoS of a target word) will be generated and their association probabilities of P( 0 c\u2032 | , feature vector) are also derived. For instance, If we take those word sequences listed in Table 1 as training data and as feature pattern, and set '\u7814\u7a76' as target word, we would train with a result containing a rule pattern = (VH, PERIOD) and derive the probabilities of P(VE | '\u7814\u7a76', (VH, PERIOD)) = 2/3 and P(NV | '\u7814\u7a76', (VH, PERIOD)) = 1/3. 
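A minimal sketch of how such rule-pattern probabilities can be collected by relative-frequency counting (the data literal below simply mirrors the '\u7814\u7a76' example above; the variable names are ours, not the authors'):

from collections import Counter, defaultdict

# Each training instance: (target word, context pattern, proofread PoS).
instances = [
    ('\u7814\u7a76', ('VH', 'PERIOD'), 'VE'),
    ('\u7814\u7a76', ('VH', 'PERIOD'), 'VE'),
    ('\u7814\u7a76', ('VH', 'PERIOD'), 'Nv'),
]

counts = defaultdict(Counter)
for word, pattern, pos in instances:
    counts[(word, pattern)][pos] += 1

def rule_probability(word, pattern, pos):
    # Relative frequency estimate of P(pos | word, pattern).
    total = sum(counts[(word, pattern)].values())
    return counts[(word, pattern)][pos] / total if total else 0.0

print(rule_probability('\u7814\u7a76', ('VH', 'PERIOD'), 'VE'))   # 2/3
print(rule_probability('\u7814\u7a76', ('VH', 'PERIOD'), 'Nv'))   # 1/3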
The rule patterns and their association probability will be utilized to determine the probability of each candidate PoS of a target word in a testing sentence.", "cite_spans": [], "ref_spans": [ { "start": 481, "end": 488, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Context-Rule Model", "sec_num": "3.3" }, { "text": "Suppose that the target word has ambiguous categories of , and the context patterns of , then the probability to assign tag to the target word is defined as follows: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Rule Model", "sec_num": "3.3" }, { "text": "0 w n i c 1 1 c c \u2212 w 1 1 c c \u2212 pattern 0 w c ,..., c c , 2 1 m pattern pattern ,..., ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Rule Model", "sec_num": "3.3" }, { "text": "There is a tradeoff between amount of manual proofreading and the best accuracy. If the goal of tagging is to achieve an accuracy of 99%, then an estimated threshold value of confidence score to achieve the target accuracy will be given and the tagged word with confidence score less than this designated threshold value will be checked. On the other hand, if the constraint is to finish the tagging process under the constraints of limited time and manual labors, in order to achieve the best accuracy, we will first estimate the amount of partial corpus which can be proofread under the constrained time and labors, and then determine the threshold value of the confidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "The six ambivalence words with different frequencies, listed in Table 2 , were picked as our target words in the experiments. We like to see the tagging accuracy and confidence measure effected by variation of ambivalence and the amount of training data among selected target words. The Sinica corpus is divided into two parts as our training data and testing data. The training data contains 90% of the corpus, while the testing data is the remaining 10%.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "Some words' frequencies are too low to have enough training data, such as the target words '\u63a1\u8a2a interview' and '\u6f14\u51fa perform'. To solve the problem of data sparseness, the Jeffreys-Perks law, or", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "Expected Likehood Estimation (ELE) (Manning et al., 1999) , is introduced as the smoothing method for all evaluated tagging algorithms. The probability is defined as ) ,...,", "cite_spans": [ { "start": 35, "end": 57, "text": "(Manning et al., 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "( 1 n w w P N w w C n ) ,..., ( 1 N , where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "is the amount that pattern occurs in the training data, and is the total amount of all training patterns. 
To smooth for an unseen event, the probability of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "is redefined as ) ,..., ( 1 n w w C ) ,..., n w n w w ,..., 1 ( 1 w P \u03bb \u03bb B N w w C n + + ) ,..., ( 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": ", where denotes the amount of all pattern types in training data and B \u03bb denotes the default occurrence count for an unseen event. That is to say, we assume a value \u03bb for an unseen event as its occurrence count. If the value of \u03bb is 0, it means that there is no smoothing process for the unseen events. The most widely used value for \u03bb is 0.5, which is also applied in the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "In our experiments, the confidence measure of the ratio of probability gap between top choice candidate and the second choice candidate ) ( ) ( ) ( Figure 1 Tradeoffs between amount of manual proofreading and the best accuracy increases, too. It is obvious to see that the accuracy of context-rule algorithm increases slower than those of other two algorithms while the amount of manual proofreading increases more. The values of best accuracy of three algorithms will meet in a point of 99% approximately, with around 20% of required manual proofreading on result tags. After the meeting point, Bayesian classifier and", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Figure 1", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "Markov bi-gram model will have higher value of best accuracy than context-rule classifier when the amount of manual proofreading is over 20% of the tagged results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "The result picture shows that if the required tagging accuracy is over 99% and there are plenty of labors and time available for manual proofreading, the Bayesian classifier and Markov bi-gram model would be better choices, since they have higher best accuracies than the context-rule classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tradeoffs between Amount of Manual Proofreading and the Best Accuracy", "sec_num": "4" }, { "text": "In this paper, we proposed a new way of finding the most cost-effective tagging algorithm. The cost-effective is defined in term of a criterion of reliability. The reliability of the system is measured in term of confidence score of ambiguity resolution of each tagging. For the best cost-effective tagging algorithm, in average, 20% of samples of ambivalence words need to be rechecked to achieve an accuracy of 99%. In other word, the manual labor of proofreading is reduced more than 80%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In future, we like to extend the coverage of confidence checking for all words, including words with single PoS, to detect flexible word uses. 
The confidence measure for words with single PoS can be made by comparing the tagging probability of this particular PoS with all other categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviews' valuable comments and suggestions. The work is partially supported by the grant of NSC 92-2213-E-001-016.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement:", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "HMM-based Part-of-Speech Tagging for Chinese Corpora", "authors": [ { "first": "C", "middle": [ "H D" ], "last": "Chang & C", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "40--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. H. Chang & C. D. Chen, 1993, \"HMM-based Part-of-Speech Tagging for Chinese Corpora,\" in Proceedings of the Workshop on Very Large Corpora, Columbus, Ohio, pp. 40-47.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Category Guessing for Chinese Unknown Words", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Bai", "suffix": "" }, { "first": "&", "middle": [ "K J" ], "last": "Chen", "suffix": "" } ], "year": 1997, "venue": "Proceedings of NLPRS97", "volume": "", "issue": "", "pages": "35--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. J. Chen, M. H. Bai, & K. J. Chen, 1997, \"Category Guessing for Chinese Unknown Words,\" in Proceedings of NLPRS97, Phuket, Thailand, pp. 35-40.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning & Hinrich Schutze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "202--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning & Hinrich Schutze, Foundations of Statistical Natural Language Processing, The MIT Press, 1999, pp. 43-45, pp. 202-204.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Simple Rule-Based Part-of-Speech Taggers", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1992, "venue": "Proceedings of ANLP-92, 3rd Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "152--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Brill, \"A Simple Rule-Based Part-of-Speech Taggers,\" in Proceedings of ANLP-92, 3rd Conference on Ap- plied Natural Language Processing 1992, pp. 152-155.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Sinica Corpus: Design Methodology for Balanced Corpora", "authors": [ { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "C", "middle": [ "R" ], "last": "Huang", "suffix": "" }, { "first": "L", "middle": [ "P" ], "last": "Chang", "suffix": "" }, { "first": "H", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 1996, "venue": "Proceedings of PACLIC II", "volume": "", "issue": "", "pages": "167--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. J. Chen, C. R. Huang, L. P. Chang, & H. L. 
Hsu, 1996, \"Sinica Corpus: Design Methodology for Balanced Corpora,\" in Proceedings of PACLIC II, Seoul, Korea, pp. 167-176.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Part of Speech Tagging of Chinese Sentences Using Genetic Algorithm", "authors": [ { "first": "K", "middle": [ "T" ], "last": "Lua", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ICCC96", "volume": "", "issue": "", "pages": "45--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. T. Lua, 1996, \"Part of Speech Tagging of Chinese Sentences Using Genetic Algorithm,\" in Proceedings of ICCC96, National University of Singapore, pp. 45-49.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semi-) Automatic Detection of Errors in PoS-Tagged Corpora", "authors": [ { "first": "P", "middle": [], "last": "Kveton", "suffix": "" }, { "first": "&", "middle": [ "K" ], "last": "Oliva", "suffix": "" } ], "year": 2002, "venue": "Proceedings of Coling", "volume": "", "issue": "", "pages": "509--515", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Kveton & K. Oliva, 2002, \"(Semi-) Automatic Detection of Errors in PoS-Tagged Corpora,\" in Proceedings of Coling 2002, Taipei, Tai-wan, pp. 509-515.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic Part-of-Speech Tagging for Chinese Corpora", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Liu", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "L", "middle": [ "P" ], "last": "Chang", "suffix": "" }, { "first": "Y", "middle": [ "H" ], "last": "Chin", "suffix": "" } ], "year": 1995, "venue": "on Computer Proceeding of Oriental Languages", "volume": "9", "issue": "", "pages": "31--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. H. Liu, K. J. Chen, L. P. Chang, & Y. H. Chin, 1995, \"Automatic Part-of-Speech Tagging for Chinese Cor- pora,\" on Computer Proceeding of Oriental Languages, Hawaii, Vol. 9, pp.31-48.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Detecting Errors in Corpora Using Support Vector Machines", "authors": [ { "first": "T", "middle": [], "last": "Nakagawa", "suffix": "" }, { "first": "& Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2002, "venue": "Proceedings of Coling", "volume": "", "issue": "", "pages": "709--715", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Nakagawa & Y. Matsumoto, 2002, \"Detecting Errors in Corpora Using Support Vector Machines,\" in Pro- ceedings of Coling 2002, Taipei, Taiwan, pp.709-715.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "the value of P( c 0 \u2032 | , feature vector) is zero which means there is no training examples with the same context vector of . If the full scope of the context feature vector is used, data sparseness problem will seriously hurt the system performance. Therefore partial feature vectors are used instead of full feature vectors. The partial feature vectors applied in our context-rule classifier are , , ,", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "COMMA), ... etc, associated with the PoS VE of target word from the following sentence while the target word is '\u7814\u7a76 research'. Tsou (Nb) \u5148\u751f Mr (Na) \u7814\u7a76 research (VE) \u4e4b\u9918 after (Ng) \uff0c(COMMA) \" After Mr. 
Tsou has done his research,\"", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": ", the probabilities of different patterns with the same candidate PoS are accumulated and normalized by the total probability distributed to all candidates as the probability of the candidate PoS. The algorithm will tag the PoS of the highest probability.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "all three different models.", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "shows the result pictures of tradeoffs between amount of proofreading and the estimated best accuracies for the three different algorithms. Without any manual proofreading on result tags, the accuracy of context-rule algorithm is about 1.4% higher than the Bayesian classifier and Markov bi-gram model. As the percentage of manual proofreading increases, the accuracy of each algorithm", "num": null }, "TABREF0": { "content": "", "html": null, "num": null, "text": "", "type_str": "table" }, "TABREF1": { "content": "
A general confidence measure was defined as the value of P(c1) / (P(c1) + P(c2)), where P(c1) is the probability of the top choice PoS c1 assigned by the tagging algorithm and P(c2) is the probability of the second choice PoS c2 (see footnote 1). The common terms used in the following tagging algorithms were
also defined as follows:
wk : The k-th word in a sequence
ck : The PoS associated with the k-th word wk
w1 c1, ..., wn cn : A word sequence containing n words with their associated categories, respectively
3.1 Markov Bi-gram Model
Given a word sequence w1, ..., wn, the Markov bi-gram model searches for the PoS sequence c1, ..., cn such that argmax \u03a0 P(wk | ck) * P(ck | ck-1) is achieved. In our experiment, since we are
", "html": null, "num": null, "text": "The proofreader can see the other examples as references to determine whether or not each tagging result is correct. If all of the occurrences of ambivalence word have to be rechecked, it is still too much of the work. Therefore only words with low confidence scores will be rechecked.The most widely used tagging models are part-of-speech n-gram models, in particular bi-gram and tri-gram model. In a bi-gram model, it looks at pair of categories (or words) and uses the conditional probability of , and the Markov assumption is that the probability of a PoS occurring depends only on the PoS before it.", "type_str": "table" }, "TABREF5": { "content": "", "html": null, "num": null, "text": "Target words used in the experiments", "type_str": "table" } } } }