{ "paper_id": "I05-1047", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:25:14.269120Z" }, "title": "A Chunking Strategy Towards Unknown Word Detection in Chinese Word Segmentation", "authors": [ { "first": "Zhou", "middle": [], "last": "Guodong", "suffix": "", "affiliation": {}, "email": "zhougd@i2r.a-star.edu.sg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes a chunking strategy to detect unknown words in Chinese word segmentation. First, a raw sentence is pre-segmented into a sequence of word atoms 1 using a maximum matching algorithm. Then a chunking model is applied to detect unknown words by chunking one or more word atoms together according to the word formation patterns of the word atoms. In this paper, a discriminative Markov model, named Mutual Information Independence Model (MIIM), is adopted in chunking. Besides, a maximum entropy model is applied to integrate various types of contexts and resolve the data sparseness problem in MIIM. Moreover, an error-driven learning approach is proposed to learn useful contexts in the maximum entropy model. In this way, the number of contexts in the maximum entropy model can be significantly reduced without performance decrease. This makes it possible for further improving the performance by considering more various types of contexts. Evaluation on the PK and CTB corpora in the First SIGHAN Chinese word segmentation bakeoff shows that our chunking approach successfully detects about 80% of unknown words on both of the corpora and outperforms the best-reported systems by 8.1% and 7.1% in unknown word detection on them respectively.", "pdf_parse": { "paper_id": "I05-1047", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes a chunking strategy to detect unknown words in Chinese word segmentation. First, a raw sentence is pre-segmented into a sequence of word atoms 1 using a maximum matching algorithm. Then a chunking model is applied to detect unknown words by chunking one or more word atoms together according to the word formation patterns of the word atoms. In this paper, a discriminative Markov model, named Mutual Information Independence Model (MIIM), is adopted in chunking. Besides, a maximum entropy model is applied to integrate various types of contexts and resolve the data sparseness problem in MIIM. Moreover, an error-driven learning approach is proposed to learn useful contexts in the maximum entropy model. In this way, the number of contexts in the maximum entropy model can be significantly reduced without performance decrease. This makes it possible for further improving the performance by considering more various types of contexts. Evaluation on the PK and CTB corpora in the First SIGHAN Chinese word segmentation bakeoff shows that our chunking approach successfully detects about 80% of unknown words on both of the corpora and outperforms the best-reported systems by 8.1% and 7.1% in unknown word detection on them respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Prior to any linguistic analysis of Chinese text, Chinese word segmentation is the necessary first step and one of major bottlenecks in Chinese information processing since a Chinese sentence is written in a continuous string of characters without obvious separators (such as blanks) between the words. 
During the past two decades, this research has been a hot topic in Chinese information processing [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] .", "cite_spans": [ { "start": 401, "end": 404, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 405, "end": 408, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 409, "end": 412, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 413, "end": 416, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 417, "end": 420, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 421, "end": 424, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 425, "end": 428, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 429, "end": 432, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 433, "end": 436, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 437, "end": 441, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There exist two major problems in Chinese word segmentation: ambiguity resolution and unknown word detection. While n-gram modeling and/or word cooccurrence has been successfully applied to deal with the ambiguity problems [3, 5, 10, 12, 13] , unknown word detection has become the major bottleneck in Chinese 1 In this paper, word atoms refer to basic building units in words. For example, the word \"\u8ba1\u7b97 \u673a \" (computer) consists of two word atoms: \" \u8ba1 \u7b97 \"(computing) and \" \u673a \"(machine). Generally, word atoms can either occur independently, e.g. \"\u8ba1\u7b97\"(computing), or only become a part of a word, e.g. \"\u673a\"(machine) in the word \"\u8ba1\u7b97\u673a\" (computer). word segmentation. Currently, almost all Chinese word segmentation systems rely on a word dictionary. The problem is that when the words stored in the dictionary are insufficient, the system's performance will be greatly deteriorated by the presence of words that are unknown to the system. Moreover, manual maintenance of a dictionary is very tedious and time consuming. It is therefore important for a Chinese word segmentation system to identify unknown words from the text automatically.", "cite_spans": [ { "start": 223, "end": 226, "text": "[3,", "ref_id": "BIBREF2" }, { "start": 227, "end": 229, "text": "5,", "ref_id": "BIBREF4" }, { "start": 230, "end": 233, "text": "10,", "ref_id": "BIBREF9" }, { "start": 234, "end": 237, "text": "12,", "ref_id": "BIBREF11" }, { "start": 238, "end": 241, "text": "13]", "ref_id": "BIBREF12" }, { "start": 310, "end": 311, "text": "1", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In literature, two categories of competing approaches are widely used to detect unknown words 2 : statistical approaches [5, 11, 12, 13, 14, 15] and rule-based approaches [5, 11, 14, 15] . Although rule-based approaches have the advantage of being simple, the complexity and domain dependency of how the unknown words are produced greatly reduce the efficiency of these approaches. On the other hand, statistical approaches have the advantage of being domain-independent [16] . It is interesting to note that many systems apply a hybrid approach [5, 11, 14, 15] . 
Regardless of the choice of different approaches, finding a way to automatically detect unknown words has become a crucial issue in Chinese word segmentation and Chinese information processing in general.", "cite_spans": [ { "start": 94, "end": 95, "text": "2", "ref_id": "BIBREF1" }, { "start": 121, "end": 124, "text": "[5,", "ref_id": "BIBREF4" }, { "start": 125, "end": 128, "text": "11,", "ref_id": "BIBREF10" }, { "start": 129, "end": 132, "text": "12,", "ref_id": "BIBREF11" }, { "start": 133, "end": 136, "text": "13,", "ref_id": "BIBREF12" }, { "start": 137, "end": 140, "text": "14,", "ref_id": "BIBREF13" }, { "start": 141, "end": 144, "text": "15]", "ref_id": "BIBREF14" }, { "start": 171, "end": 174, "text": "[5,", "ref_id": "BIBREF4" }, { "start": 175, "end": 178, "text": "11,", "ref_id": "BIBREF10" }, { "start": 179, "end": 182, "text": "14,", "ref_id": "BIBREF13" }, { "start": 183, "end": 186, "text": "15]", "ref_id": "BIBREF14" }, { "start": 471, "end": 475, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 546, "end": 549, "text": "[5,", "ref_id": "BIBREF4" }, { "start": 550, "end": 553, "text": "11,", "ref_id": "BIBREF10" }, { "start": 554, "end": 557, "text": "14,", "ref_id": "BIBREF13" }, { "start": 558, "end": 561, "text": "15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Input raw sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u5f20 \u6770 \u6bd5 \u4e1a \u81ea \u4ea4 \u901a \u5927 \u5b66 . MMA pre-segmentation: \u5f20 \u6770 \u6bd5 \u4e1a \u81ea \u4ea4 \u901a \u5927 \u5b66 . Unknown word detection: \u5f20 \u6770 \u6bd5 \u4e1a \u81ea \u4ea4 \u901a \u5927 \u5b66 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "graduate from JiaoTong University.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Zhang Jie", "sec_num": null }, { "text": "This paper proposes a chunking strategy to cope with unknown words in Chinese word segmentation. First, a raw sentence is pre-segmented into a sequence of word atoms (i.e. single-character words and multi-character words) using a maximum matching algorithm (MMA) 3 . Then a chunking model is applied to detect unknown words by chunking one or more word atoms together according to the word formation patterns of the word atoms. Figure 1 gives an example. Here, the problem of unknown word detection is re-cast as chunking one or more word atoms together to form a new word and a discriminative Markov model, named Mutual Information Independence Model (MIIM), is adopted in chunking. Besides, a maximum entropy model is applied to integrate various types of contexts and resolve the data sparseness problem in MIIM. Moreover, an error-driven learning approach is proposed to learn useful 2 Some systems [13, 14] focus on proper names due to their importance in Chinese information processing. 3 A typical MMA identifies all character sequences which are found in the word dictionary and marks them as words. Those character sequences, which can be segmented in more than one way, are marked as ambiguous and a word unigram model is applied to choose the most likely segmentation sequence. The remaining sequences, i.e. those not found in the dictionary, are called fragments and segmented into single characters. In this way, each Chinese sentence is pre-segmented into a sequence of single-character words and multicharacter words. 
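As a concrete illustration of this pre-segmentation step, the following is a minimal forward maximum matching sketch in Python (greedy longest match only; the word-unigram disambiguation of ambiguous regions described above is omitted, and the dictionary is a toy set):

```python
def mma_presegment(sentence, dictionary, max_word_len=4):
    """Greedy forward maximum matching: at each position take the longest
    dictionary word; characters covered by no dictionary word are emitted
    as single-character word atoms."""
    atoms, i = [], 0
    while i < len(sentence):
        match = sentence[i]  # fall back to a single character
        for length in range(min(max_word_len, len(sentence) - i), 1, -1):
            if sentence[i:i + length] in dictionary:
                match = sentence[i:i + length]
                break
        atoms.append(match)
        i += len(match)
    return atoms

# The pre-segmentation of the sentence in Fig. 1:
print(mma_presegment("张杰毕业自交通大学", {"毕业", "交通", "大学"}))
# ['张', '杰', '毕业', '自', '交通', '大学']
```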
For convenience, we call these single-character words and multi-character words in the output of the MMA algorithm as word atoms.", "cite_spans": [ { "start": 263, "end": 264, "text": "3", "ref_id": "BIBREF2" }, { "start": 888, "end": 889, "text": "2", "ref_id": "BIBREF1" }, { "start": 903, "end": 907, "text": "[13,", "ref_id": "BIBREF12" }, { "start": 908, "end": 911, "text": "14]", "ref_id": "BIBREF13" }, { "start": 993, "end": 994, "text": "3", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 428, "end": 436, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Fig. 1. MMA and unknown word detection by chunking: an example", "sec_num": null }, { "text": "contexts in the maximum entropy model. In this way, the number of contexts in the maximum entropy model can be significantly reduced without performance decrease. This makes it possible for further improving the performance by considering more various types of contexts in the future. Evaluation on the PK and CTB corpora in the First SIGHAN Chinese word segmentation bakeoff shows that our chunking strategy performs best in unknown word detection on both of the corpora. The rest of the paper is as follows: In Section 2, we will discuss in details about our chunking strategy in unknown word detection. Experimental results are given in Section 3. Finally, some remarks and conclusions are made in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fig. 1. MMA and unknown word detection by chunking: an example", "sec_num": null }, { "text": "In this section, we will first describe the chunking strategy in unknown word detection of Chinese word segmentation using a discriminative Markov model, called Mutual Information Independence Model (MIIM). Then a maximum entropy model is applied to integrate various types of contexts and resolve the data sparseness problem in MIIM. Finally, an error-driven learning approach is proposed to select useful contexts and reduce the context feature vector dimension. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unknown Word Detection by Chunking", "sec_num": "2" }, { "text": "The second term in Equation 1is the pair-wise mutual information (PMI) between n S 1 and n O 1 . In order to simplify the computation of this term, we assume a pair-wise mutual information independence (2):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "\u2211 = = n i n i n n O s PMI O S PMI 1 1 1 1 ) , ( ) , ( or \u2211 = \u22c5 = \u22c5 n i n i n i n n n n O P s P O s P O P S P O S P 1 1 1 1 1 1 1 ) ( ) ( ) , ( log ) ( ) ( ) , ( log (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "That is, an individual state is only dependent on the observation sequence n O 1 and independent on other states in the state sequence n S 1 . This assumption is reasonable because the dependence among the states in the state sequence n S 1 has already been captured by the first term in Equation (1). 
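Written out in full, Equation (1) and the pairwise mutual information independence assumption of Equation (2) are:

```latex
\log P(S_1^n \mid O_1^n) \;=\; \log P(S_1^n) \;+\; \log \frac{P(S_1^n, O_1^n)}{P(S_1^n)\, P(O_1^n)} \qquad (1)

PMI(S_1^n, O_1^n) \;=\; \sum_{i=1}^{n} PMI(s_i, O_1^n),
\quad \text{i.e.} \quad
\log \frac{P(S_1^n, O_1^n)}{P(S_1^n)\, P(O_1^n)}
\;=\; \sum_{i=1}^{n} \log \frac{P(s_i, O_1^n)}{P(s_i)\, P(O_1^n)} \qquad (2)
```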
Applying Equation 2to Equation (1), we have Equation (3) 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2211 \u2211 = = \u2212 + = n i n i n i i i n n O s P S s PMI O S P 1 1 2 1 1 1 1 ) | ( log ) , ( ) | ( log", "eq_num": "(3)" } ], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "We call the above model as shown in Equation 3the Mutual Information Independence Model due to its pair-wise mutual information assumption as shown in Equation 2. The above model consists of two sub-models: the state transition model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "\u2211 = \u2212 n i i i S s PMI 2 1 1 ) , (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "as the first term in Equation 3and the output model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "\u2211 = n i n i O s P 1 1 ) | ( log", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "as the second term in Equation 3. Here, a variant of the Viterbi algorithm [19] in decoding the standard Hidden Markov Model (HMM) [18] is implemented to find the most likely state sequence by replacing the state transition model and the output model of the standard HMM with the state transition model and the output model of the MIIM, respectively.", "cite_spans": [ { "start": 75, "end": 79, "text": "[19]", "ref_id": "BIBREF18" }, { "start": 131, "end": 135, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Independence Model and Unknown Word Detection", "sec_num": "2.1" }, { "text": "For unknown word detection by chunking, a word (known word or unknown word) is regarded as a chunk of one or more word atoms and we have: 3, we can see that the state transition model of MIIM can be computed by using ngram modeling [20, 21, 22] , where each tag is assumed to be dependent on the N-1 previous tags (e.g. 2). The problem with the above MIIM lies in the data sparseness problem raised by its output model:", "cite_spans": [ { "start": 232, "end": 236, "text": "[20,", "ref_id": "BIBREF19" }, { "start": 237, "end": 240, "text": "21,", "ref_id": "BIBREF20" }, { "start": 241, "end": 244, "text": "22]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Unknown Word Detection", "sec_num": null }, { "text": "\u2022 > =< i i i w p o , ; i w is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unknown Word Detection", "sec_num": null }, { "text": "\u2211 = n i n i O s P 1 1 ) | ( log . Ideally, we", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unknown Word Detection", "sec_num": null }, { "text": "would have sufficient training data for every event whose conditional probability we wish to calculate. Unfortunately, there is rarely enough training data to compute accurate probabilities when decoding on new data. 
Generally, two smoothing approaches [21, 22, 23] are applied to resolve this problem: linear interpolation and back-off. However, these two approaches only work well when the number of different information sources is very limited. When a few features and/or a long context are considered, the number of different information sources is exponential. This makes smoothing approaches inappropriate in our system. In this paper, the maximum entropy model [24] is proposed to integrate various context information sources and resolve the data sparseness problem in our system. The reason that we choose the maximum entropy model for this purpose is that it represents the state-ofthe-art in the machine learning research community and there are good implementations of the algorithm available. Here, we use the open NLP maximum entropy package 6 in our system.", "cite_spans": [ { "start": 253, "end": 257, "text": "[21,", "ref_id": "BIBREF20" }, { "start": 258, "end": 261, "text": "22,", "ref_id": "BIBREF21" }, { "start": 262, "end": 265, "text": "23]", "ref_id": "BIBREF22" }, { "start": 669, "end": 673, "text": "[24]", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Unknown Word Detection", "sec_num": null }, { "text": "The maximum entropy model is a probability distribution estimation technique widely used in recent years for natural language processing tasks. The principle of the maximum entropy model in estimating probabilities is to include as much information as is known from the data while making no additional assumptions. The maximum entropy model returns the probability distribution that satisfies the above property with the highest entropy. Formally, the decision function of the maximum entropy model can be represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Entropy", "sec_num": "2.2" }, { "text": "\u220f = = k j o h f j j h Z h o P 1 ) , ( ) ( 1 ) , ( \u03b1 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Entropy", "sec_num": "2.2" }, { "text": "where o is the outcome, h is the history (context feature vector in this paper), Z(h) is a normalization function, {f 1 , f 2 , ..., f k } are feature functions and {\u03b1 1 , \u03b1 2 , \u2026, \u03b1 k } are the model parameters. Each model parameter corresponds to exactly one feature and can be viewed as a \"weight\" for that feature. All features used in the maximum entropy model are binary, e.g. : current word atom formation pattern, current word atom, next word atom formation pattern and next word atom However, there exists a problem when we include above various context information in the maximum entropy model: the context feature vector dimension easily becomes too large for the model to handle. One easy solution to this problem is to only keep those frequently occurring contexts in the model. Although this frequency filtering approach is simple, many useful contexts may not occur frequently and be filtered out while those kept may not be useful. 
To resolve this problem, we propose an alternative error-driven learning approach to only keep useful contexts in the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Entropy", "sec_num": "2.2" }, { "text": "Here, we propose an error-driven learning approach to examine the effectiveness of various contexts and select useful contexts to reduce the size of the context feature vector used in the maximum entropy model for estimating", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": ") | ( 1 n i O s P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": "in the output model of MIIM. This makes it possible to further improve the performance by incorporating more various types of contexts in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": "Assume \u03a6 is the container for useful contexts. Given a set of existing useful contexts \u03a6 and a set of new contexts \u2206\u03a6 , the effectiveness of a new context", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": "i C \u2206\u03a6 \u2208 , ) , ( i C E \u03a6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": ", is measured by the i C -related reduction in errors which results from adding the new context set \u2206\u03a6 to the useful context set \u03a6 : , we declare that the new context i C is a useful context and should be added to \u03a6 . Otherwise, the new context i C is considered useless and discarded. Given the above error-driven learning approach, we initialize } { i p = \u03a6 (i.e. we assume all the current word atom formation patterns are useful contexts) and choose one of the other context types as the new context set \u2206\u03a6 , e.g.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ") , ( # ) , ( # ) , ( i i i C Error C Error C E \u2206\u03a6 + \u03a6 \u2212 \u03a6 = \u03a6", "eq_num": "(6)" } ], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": "} { i i w p = \u03a6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": ". Then, we can train two MIIMs with different output models using \u03a6 and \u2206\u03a6 + \u03a6 respectively. Moreover, useful contexts are learnt on the training data in a two-fold way. 
For each fold, two MIIMs are trained on 50% of the training data and for each new context i C in \u2206\u03a6 , evaluate its effectiveness ) , (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": "i C E \u03a6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": "on the remaining 50% of the training data according to the context effectiveness measure as shown in Equation (6) ", "cite_spans": [ { "start": 110, "end": 113, "text": "(6)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": ". If 0 ) , ( > \u03a6 i C E", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": ", i C is marked as a useful context and added to \u03a6 . In this way, all the useful contexts in \u2206\u03a6 are incorporated into the useful context set \u03a6 . Similarly, we can include useful contexts of other context types into the useful context set \u03a6 one by one. In this paper, various types of contexts are learnt one by one in the exact same order as shown in Section 2.2. Finally, since different types of contexts may have cross-effects, the above process is iterated with the renewed useful context set \u03a6 until very few useful contexts can be found at each loop. Our experiments show that iteration converges within four loops.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Feature Selection Using Error-Driven Learning", "sec_num": "2.3" }, { "text": "All of our experiments are evaluated on the PK and CTB benchmark corpora used in the First SIGHAN Chinese word segmentation bakeoff 7 with the closed configuration. That is, only the training data from the particular corpus is used during training. For unknown word detection, the chunking training data is derived by using the same Maximum Matching Algorithm (MMA) to segment each word in the original training data as a chunk of word atoms. This is done in a two-fold way. For each fold, the MMA is trained on 50% of the original training data and then used to segment the remaining 50% of the original training data. Then the MIIM is used to train a chunking model for unknown word detection on the chunking training data. Table 1 shows the details of the two corpora. Here, OOV is defined as the percentage of words in the test corpus not occurring in the training corpus and indicates the out-ofvocabulary rate in the test corpus. Table 2 shows the detailed performance of our system in unknown word detection and Chinese word segmentation as a whole using the standard scoring script 8 on the test data. In this and subsequent tables, various evaluation measures are provided: precision (P), recall (R), F-measure, recall on out-of-vocabulary words ( OOV R ) and recall on in-vocabulary words ( IV R ). It shows that our system achieves precision/recall/F-measure of 93.5%/96.1%/94.8 and 90.5%/90.1%/90.3 on the PK and CTB corpora respectively. Especially, our chunking approach can successfully detect 80.5% and 77.6% of unknown words on the PK and CTB corpora respectively. Table 3 and Table 4 compare our system with other best-reported systems on the PK and CTB corpora respectively. 
Table 3 shows that our chunking approach to unknown word detection outperforms the others by more than 8% on the PK corpus. It also shows that our system performs comparably with the best-reported systems on the PK corpus, where the out-of-vocabulary rate is moderate (6.9%). Our performance in Chinese word segmentation as a whole is somewhat pulled down by the lower recall on in-vocabulary words. This may be due to the tendency of our chunking strategy to over-detect unknown words by wrongly combining some in-vocabulary words into unknown words. Such a tendency can hurt Chinese word segmentation as a whole when the gain in unknown word detection fails to compensate for the loss caused by wrongly combining in-vocabulary words into unknown words. This happens when the out-of-vocabulary rate is not high, e.g. on the PK corpus. Table 4 shows that our chunking approach to unknown word detection outperforms the others by more than 7% on the CTB corpus. It also shows that our system outperforms the other best-reported systems by more than 2% in Chinese word segmentation as a whole on the CTB corpus. This is largely due to the huge gain in unknown word detection when the out-of-vocabulary rate is high (e.g. 18.1% on the CTB corpus), even though our system recalls in-vocabulary words less well than the others. Evaluation on both the PK and CTB corpora shows that our chunking approach successfully detects about 80% of unknown words on corpora with a wide range of out-of-vocabulary rates. This suggests the power of using the various word formation patterns of word atoms in detecting unknown words. It also demonstrates the effectiveness and robustness of our chunking approach in unknown word detection for Chinese word segmentation, and its portability to different genres. Finally, Table 5 and Table 6 compare our error-driven learning approach with the frequency filtering approach in learning useful contexts for the output model of MIIM on the PK and CTB corpora respectively. Due to memory limitations, at most 400K useful contexts are considered in the frequency filtering approach. First, they show that the error-driven learning approach is much more effective than the simple frequency filtering approach. With the same number of useful contexts, the error-driven learning approach outperforms the frequency filtering approach by 7.8%/0.6% and 5.5%/0.8% in R_OOV (unknown word detection)/F-measure (Chinese word segmentation as a whole) on the PK and CTB corpora respectively. Moreover, the error-driven learning approach still slightly outperforms the frequency filtering approach at its best configuration, which uses 2.5 and 3.5 times as many useful contexts. Second, they show that increasing the number of frequently occurring contexts in the frequency filtering approach does not necessarily increase performance. This may be because some frequently occurring contexts are noisy or useless, and including them can have a negative effect. Third, they show that the error-driven learning approach is effective in learning useful contexts, reducing the number of possible contexts by 96-98%. Finally, the figures inside parentheses show the number of useful patterns shared between the error-driven learning approach and the frequency filtering approach. They show that about 40-50% of the useful contexts selected by the error-driven learning approach do not appear among the frequently occurring contexts selected by the frequency filtering approach. 
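As an illustration of the context selection procedure of Section 2.3, the following is a minimal sketch of the error-driven learning loop (train_miim and count_errors are hypothetical placeholders for training an MIIM output model on one half of the training data and counting context-related chunking errors on the other half, per Equation (6)):

```python
def select_useful_contexts(pattern_contexts, context_types, train_miim, count_errors):
    """Error-driven context selection: a candidate context c is kept only if
    adding its whole context type reduces the number of c-related errors,
    i.e. E(Phi, c) = #Error(Phi, c) - #Error(Phi + DeltaPhi, c) > 0."""
    useful = set(pattern_contexts)            # Phi starts from the formation patterns p_i
    changed = True
    while changed:                            # iterate; the paper reports convergence within four loops
        changed = False
        for delta in context_types:           # one candidate context type DeltaPhi at a time
            base = train_miim(useful)                 # MIIM output model using Phi
            extended = train_miim(useful | delta)     # MIIM output model using Phi + DeltaPhi
            for c in delta - useful:
                if count_errors(base, c) - count_errors(extended, c) > 0:
                    useful.add(c)
                    changed = True
    return useful
```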
", "cite_spans": [], "ref_spans": [ { "start": 726, "end": 733, "text": "Table 1", "ref_id": "TABREF4" }, { "start": 936, "end": 943, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 1582, "end": 1589, "text": "Table 3", "ref_id": null }, { "start": 1594, "end": 1601, "text": "Table 4", "ref_id": null }, { "start": 1694, "end": 1701, "text": "Table 3", "ref_id": null }, { "start": 2557, "end": 2564, "text": "Table 4", "ref_id": null }, { "start": 3534, "end": 3541, "text": "Table 5", "ref_id": null }, { "start": 3546, "end": 3553, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3" }, { "text": "In this paper, a chunking strategy is presented to detect unknown words in Chinese word segmentation by chunking one or more word atoms together according to the various word formation patterns of the word atoms. Besides, a maximum entropy model is applied to integrate various types of contexts and resolve the data sparseness problem in our strategy. Finally, an error-driven learning approach is proposed to learn useful contexts in the maximum entropy model. In this way, the number of contexts in the maximum entropy model can be significantly reduced without performance decrease. This makes it possible for further improving the performance by considering more various types of contexts. Evaluation on the PK and CTB corpora in the First SIGHAN Chinese word segmentation bakeoff shows that our chunking strategy can detect about 80% of unknown words on both of the corpora and outperforms the best-reported systems by 8.1% and 7.1% in unknown word detection on them respectively. While our Chinese word segmentation system with chunkingbased unknown word detection performs comparably with the best systems on the PK corpus when the out-of-vocabulary rate is moderate(6.9%), our system significantly outperforms others by more than 2% when the out-of-vocabulary rate is high (18.1%) . This demonstrates the effectiveness and robustness of our chunking strategy in unknown word detection of Chinese word segmentation and its portability to different genres.", "cite_spans": [ { "start": 1282, "end": 1289, "text": "(18.1%)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "We have renamed the discriminative Markov model in[17] as the Mutual Information Independence Model according to the novel pair-wise mutual information independence assumption in the model. Another reason is to distinguish it from the traditional Hidden Markov Model[18] and avoid misleading.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Details about the derivation are omitted due to space limitation. 
Please see[17] for more.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://maxent.sourceforge.net", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.sighan.org/bakeoff2003/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.sighan.org/bakeoff2003/score", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On methods of Chinese automatic segmentation", "authors": [ { "first": "C", "middle": [ "Y" ], "last": "Jie", "suffix": "" }, { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "N", "middle": [ "Y" ], "last": "Liang", "suffix": "" } ], "year": 1989, "venue": "Journal of Chinese Information Processing", "volume": "3", "issue": "1", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie CY, Liu Y and Liang NY. (1989). On methods of Chinese automatic segmentation, Journal of Chinese Information Processing, 3(1):1-9.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Segmenting Chinese word and processing different meanings structure", "authors": [ { "first": "K", "middle": [ "C" ], "last": "Li", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Ky", "suffix": "" }, { "first": "Y", "middle": [ "K" ], "last": "Zhang", "suffix": "" } ], "year": 1988, "venue": "Journal of Chinese Information Processing", "volume": "2", "issue": "3", "pages": "27--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li KC, Liu KY and Zhang YK. (1988). Segmenting Chinese word and processing different meanings structure, Journal of Chinese Information Processing, 2(3):27-33.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The knowledge of Chinese word segmentation", "authors": [ { "first": "N", "middle": [ "Y" ], "last": "Liang", "suffix": "" } ], "year": 1990, "venue": "Journal of Chinese Information Processing", "volume": "4", "issue": "2", "pages": "29--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang NY, (1990). The knowledge of Chinese word segmentation, Journal of Chinese Information Processing, 4(2):29-33.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "From character to word -An application of information theory", "authors": [ { "first": "K", "middle": [ "T" ], "last": "Lua", "suffix": "" } ], "year": 1990, "venue": "Computer Processing of Chinese & Oriental Languages", "volume": "4", "issue": "4", "pages": "304--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lua KT, (1990). From character to word -An application of information theory, Computer Processing of Chinese & Oriental Languages, 4(4):304-313.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An application of information theory in Chinese word segmentation", "authors": [ { "first": "Lua", "middle": [], "last": "Kt", "suffix": "" }, { "first": "G", "middle": [ "W" ], "last": "Gan", "suffix": "" } ], "year": 1994, "venue": "Computer Processing of Chinese & Oriental Languages", "volume": "8", "issue": "1", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lua KT and Gan GW. (1994). An application of information theory in Chinese word segmentation. 
Computer Processing of Chinese & Oriental Languages, 8(1):115-124.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic processing of Chinese words", "authors": [ { "first": "Y", "middle": [ "C" ], "last": "Wang", "suffix": "" }, { "first": "S", "middle": [ "U" ], "last": "Hj", "suffix": "" }, { "first": "Y", "middle": [], "last": "Mo", "suffix": "" } ], "year": 1990, "venue": "Journal of Chinese Information Processing", "volume": "4", "issue": "4", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang YC, SU HJ and Mo Y. (1990). Automatic processing of Chinese words. Journal of Chinese Information Processing. 4(4):1-11.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Chinese text segmentation for text retrieval: achievements and problems", "authors": [ { "first": "", "middle": [], "last": "Wu", "suffix": "" }, { "first": "G", "middle": [], "last": "Tseng", "suffix": "" } ], "year": 1993, "venue": "Journal of the American Society for Information Science", "volume": "44", "issue": "9", "pages": "532--542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu JM and Tseng G. (1993). Chinese text segmentation for text retrieval: achievements and problems. Journal of the American Society for Information Science. 44(9):532-542.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The implementation of a written Chinese automatic segmentation expert system", "authors": [ { "first": "H", "middle": [], "last": "Xu", "suffix": "" }, { "first": "He", "middle": [], "last": "Kk", "suffix": "" }, { "first": "B", "middle": [], "last": "Sun", "suffix": "" } ], "year": 1991, "venue": "Journal of Chinese Information Processing", "volume": "5", "issue": "3", "pages": "38--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu H, He KK and Sun B. (1991) The implementation of a written Chinese automatic segmentation expert system, Journal of Chinese Information Processing, 5(3):38-47.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A rule-based Chinese automatic segmentation system", "authors": [ { "first": "T", "middle": [ "S" ], "last": "Yao", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Gp", "suffix": "" }, { "first": "Y", "middle": [ "M" ], "last": "Wu", "suffix": "" } ], "year": 1990, "venue": "Journal of Chinese Information Processing", "volume": "4", "issue": "1", "pages": "37--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yao TS, Zhang GP and Wu YM. (1990). A rule-based Chinese automatic segmentation system, Journal of Chinese Information Processing, 4(1):37-43.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Rule-based word identification for Mandarin Chinese sentences -A unification approach", "authors": [ { "first": "Yeh", "middle": [], "last": "Cl", "suffix": "" }, { "first": "H", "middle": [ "J" ], "last": "Lee", "suffix": "" } ], "year": 1995, "venue": "Computer Processing of Chinese & Oriental Languages", "volume": "9", "issue": "2", "pages": "97--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yeh CL and Lee HJ. (1995). 
Rule-based word identification for Mandarin Chinese sentences -A unification approach, Computer Processing of Chinese & Oriental Languages, 9(2):97-118.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A hybrid approach to unknown word detection and segmentation of Chinese, Chinese Processing of Chinese and Oriental Languages", "authors": [ { "first": "J", "middle": [ "Y" ], "last": "Nie", "suffix": "" }, { "first": "Jin", "middle": [ "Wy" ], "last": "", "suffix": "" }, { "first": "Marie-Louise", "middle": [], "last": "Hannan", "suffix": "" } ], "year": 1997, "venue": "", "volume": "11", "issue": "", "pages": "326--335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nie JY, Jin WY and Marie-Louise Hannan. (1997). A hybrid approach to unknown word detection and segmentation of Chinese, Chinese Processing of Chinese and Oriental Languages, 11(4): pp326-335.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Identification of unknown word from a corpus, computer Processing of Chinese & Oriental Languages", "authors": [ { "first": "Tung", "middle": [], "last": "Ch", "suffix": "" }, { "first": "H", "middle": [ "J" ], "last": "Lee", "suffix": "" } ], "year": 1994, "venue": "", "volume": "8", "issue": "", "pages": "131--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tung CH and Lee HJ. (1994). Identification of unknown word from a corpus, computer Processing of Chinese & Oriental Languages, 8(Supplement):131-146.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A multi-corpus approach to recognition of proper names in Chinese Text", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" } ], "year": 1994, "venue": "Computer Processing of Chinese & Oriental Languages", "volume": "8", "issue": "1", "pages": "75--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang JS et al. (1994). A multi-corpus approach to recognition of proper names in Chinese Text, Computer Processing of Chinese & Oriental Languages, 8(1):75-86", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Identifying Chinese Names In Unrestricted Texts", "authors": [ { "first": "M", "middle": [ "S" ], "last": "Sun", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Huang", "suffix": "" }, { "first": "Gao", "middle": [], "last": "Hy", "suffix": "" }, { "first": "J", "middle": [], "last": "Fang", "suffix": "" } ], "year": 1994, "venue": "Communications of Chinese and Oriental Languages Information Processing Society", "volume": "4", "issue": "2", "pages": "113--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun MS, Huang CN, Gao HY and Fang J. (1994). Identifying Chinese Names In Unrestricted Texts, Communications of Chinese and Oriental Languages Information Processing Society, 4(2):113-122.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Detection of Unknown Chinese Words Using a Hybrid Approach", "authors": [ { "first": "G", "middle": [ "D" ], "last": "Zhou", "suffix": "" }, { "first": "K", "middle": [ "T" ], "last": "Lua", "suffix": "" } ], "year": 1997, "venue": "Computer Processing of Chinese & Oriental Language", "volume": "11", "issue": "1", "pages": "63--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou GD and Lua KT, (1997). 
Detection of Unknown Chinese Words Using a Hybrid Approach, Computer Processing of Chinese & Oriental Language, 11(1):63-75.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Statistical language learning", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak, Statistical language learning, The MIT Press, ISBN 0-262-03216-3", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Named Entity Recognition Using a HMM-based Chunk Tagger", "authors": [ { "first": "Zhou", "middle": [], "last": "Gdong", "suffix": "" }, { "first": "J", "middle": [], "last": "Su", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Annual Meeting for Computational Linguistics (ACL'2002", "volume": "", "issue": "", "pages": "473--480", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou GDong and Su J. (2002). Named Entity Recognition Using a HMM-based Chunk Tagger, Proceedings of the Conference on Annual Meeting for Computational Linguistics (ACL'2002). 473-480, Philadelphia.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", "authors": [ { "first": "L", "middle": [], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "IEEE", "volume": "77", "issue": "2", "pages": "257--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rabiner L. 1989. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. IEEE 77(2), pages257-285.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm", "authors": [ { "first": "A", "middle": [ "J" ], "last": "Viterbi", "suffix": "" } ], "year": 1967, "venue": "IEEE Transactions on Information Theory, IT", "volume": "13", "issue": "2", "pages": "260--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Viterbi A.J. 1967. Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm. IEEE Transactions on Information Theory, IT 13(2), 260-269.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Good-Turing frequency estimation without tears", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "G", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1995, "venue": "Journal of Quantitative Linguistics", "volume": "2", "issue": "", "pages": "217--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale W.A. and Sampson G. 1995. Good-Turing frequency estimation without tears. Journal of Quantitative Linguistics. 2:217-237.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Self-Organized Language Modeling for Speech Recognition", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "450--506", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jelinek F. (1989). Self-Organized Language Modeling for Speech Recognition. In Alex Waibel and Kai-Fu Lee(Editors). Readings in Speech Recognitiopn. Morgan Kaufmann. 
450-506.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Katz", "suffix": "" } ], "year": 1987, "venue": "IEEE Transactions on Acoustics. Speech and Signal Processing", "volume": "35", "issue": "", "pages": "400--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katz S.M. (1987). Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer. IEEE Transactions on Acoustics. Speech and Signal Processing. 35: 400-401.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "An Empirical Study of Smoothing Technniques for Language Modeling", "authors": [ { "first": "Goodman", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th Annual Meeting of the Association of Computational Linguistics (ACL'1996)", "volume": "", "issue": "", "pages": "310--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen and Goodman. (1996). An Empirical Study of Smoothing Technniques for Language Modeling. In Proceedings of the 34th Annual Meeting of the Association of Computational Linguistics (ACL'1996). pp310-318. Santa Cruz, California, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Maximum Entropy Model for Part-of-Speech Tagging", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "133--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi A. (1996). A Maximum Entropy Model for Part-of-Speech Tagging. Proceedings of the Conference on Empirical Methods in Natural Language Processing., 133-142.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "HHMM-based Chinese Lexical Analyzer ICTCLAS", "authors": [ { "first": "H", "middle": [ "P" ], "last": "Zhang", "suffix": "" }, { "first": "H", "middle": [ "K" ], "last": "Yu", "suffix": "" }, { "first": "Xiong", "middle": [], "last": "Dy", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 2 nd SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "184--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang HP, Yu HK, Xiong DY and Liu Q. (2003). HHMM-based Chinese Lexical Analyzer ICTCLAS. Proceedings of 2 nd SIGHAN Workshop on Chinese Language Processing. 184-187. Sapporo, Japan.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Chinese Word Segmentation in MSR-NLP", "authors": [ { "first": "A", "middle": [ "D" ], "last": "Wu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 2 nd SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "172--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu AD. (2003). Chinese Word Segmentation in MSR-NLP. Proceedings of 2 nd SIGHAN Workshop on Chinese Language Processing. 172-175. 
Sapporo, Japan.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Chinese Word Segmentation Using Minimal Linguistic Knowledge", "authors": [ { "first": "A", "middle": [ "T" ], "last": "Chen", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 2 nd SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "148--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen AT. (2003). Chinese Word Segmentation Using Minimal Linguistic Knowledge. Proceedings of 2 nd SIGHAN Workshop on Chinese Language Processing. 148-151. Sapporo, Japan.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Chinese Word Segmentation at Peking University", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Duan", "suffix": "" }, { "first": "X", "middle": [ "J" ], "last": "Bai", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Bb", "suffix": "" }, { "first": "S", "middle": [ "W" ], "last": "Yu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 2 nd SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "152--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duan HM, Bai XJ, Chang BB and Yu SW. (2003). Chinese Word Segmentation at Peking University. Proceedings of 2 nd SIGHAN Workshop on Chinese Language Processing. 152- 155. Sapporo, Japan.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "content": "
In this paper, we use a discriminative Markov model, called the Mutual Information Independence Model (MIIM), proposed by Zhou et al [17], in unknown word detection by chunking. MIIM is derived from a conditional probability model. Given an observation sequence $O_1^n = o_1 o_2 \cdots o_n$, the goal of a conditional probability model is to find a stochastic optimal state (tag) sequence $S_1^n = s_1 s_2 \cdots s_n$ that maximizes:

$\log P(S_1^n \mid O_1^n) = \log P(S_1^n) + \log \dfrac{P(S_1^n, O_1^n)}{P(S_1^n) \cdot P(O_1^n)}$   (1)
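Once this model is decomposed into a state transition model and an output model (Equation (3) in the body text), the most likely tag sequence can be found with a Viterbi-style search. A minimal log-domain sketch, where transition_score and output_score are hypothetical callables standing in for the two sub-models:

```python
def viterbi_decode(observations, states, transition_score, output_score):
    """Find the highest-scoring state sequence under a model of the form
    score(S) = sum_i [ transition_score(s_{i-1}, s_i) + output_score(s_i, O, i) ],
    with all scores in the log domain.  For unknown word detection the states
    would be the structural chunk tags built from the boundary categories
    {O, B, M, E} and the word class."""
    n = len(observations)
    best = [{s: (output_score(s, observations, 0), [s]) for s in states}]
    for i in range(1, n):
        column = {}
        for s in states:
            emit = output_score(s, observations, i)
            prev_s, (prev_score, _) = max(
                best[i - 1].items(),
                key=lambda kv: kv[1][0] + transition_score(kv[0], s))
            column[s] = (prev_score + transition_score(prev_s, s) + emit,
                         best[i - 1][prev_s][1] + [s])
        best.append(column)
    return max(best[-1].values(), key=lambda v: v[0])[1]
```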
", "type_str": "table", "num": null, "text": "" }, "TABREF1": { "html": null, "content": "
the $i$-th word atom in the sequence of word atoms $W_1^n = w_1 w_2 \cdots w_n$;
o The length of $w_i$;
o The occurring frequency feature of $w_i$, which is mapped to max(log(Frequency), 9).
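As a rough illustration of how such an observation might be assembled, a sketch in Python (atom_counts and end_of_word_pct are hypothetical lookup tables collected from the training data, and reading the frequency mapping above as a cap at 9 is an assumption):

```python
import math

def atom_observation(atom, atom_counts, end_of_word_pct):
    """Build the observation o_i = <p_i, w_i> for one word atom w_i,
    where the formation pattern p_i gathers the features listed above."""
    freq = atom_counts.get(atom, 0)
    freq_feature = min(int(math.log(freq)) if freq > 0 else 0, 9)  # log-frequency feature
    pattern = (
        len(atom),                                        # length of w_i
        freq_feature,                                     # occurring frequency feature of w_i
        round(end_of_word_pct.get(atom, 0.0) * 10) / 10,  # % of w_i at the end of other words, to the nearest 10%
    )
    return (pattern, atom)
```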
", "type_str": "table", "num": null, "text": "The percentage of i w occurring at the end of other words (round to 10%) : the states are used to bracket and differentiate various types of words. In this way, Chinese unknown word detection can be regarded as a bracketing process while differentiation of different word types can help the bracketing process. i s is structural and consists of three parts:o Boundary Category (B): it includes four values: {O, B, M, E}, where O means that current word atom is a whOle word and B/M/E means that current word atom is at the Beginning/in the Middle/at the End of a word. It is used to denote the class of the word. In our system, words are classified into two types: pure Chinese word type and mixed word type (i.e. including English characters and Chinese digits/numbers/symbols). Because of the limited number of boundary and word categories, the word atom formation pattern described above is added into the structural state to represent a more accurate state transition model in MIIM while keeping its output model." }, "TABREF4": { "html": null, "content": "
Corpus                 | Abbreviation | OOV   | Training Data | Test Data
Beijing University     | PK           | 6.9%  | 1100K words   | 17K words
UPENN Chinese Treebank | CTB          | 18.1% | 250K words    | 40K words
", "type_str": "table", "num": null, "text": "Statistics of the corpora used in our evaluation" }, "TABREF5": { "html": null, "content": "
Corpus | P    | R    | F    | R_OOV | R_IV
PK     | 93.5 | 96.1 | 94.8 | 80.5  | 97.3
CTB    | 90.5 | 90.1 | 90.3 | 77.6  | 92.9
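Here F is the balanced F-measure, i.e. the harmonic mean of precision and recall; a quick check of the two rows above:

```python
def f_measure(p, r):
    """Balanced F-measure: harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

print(round(f_measure(93.5, 96.1), 1))  # 94.8 (PK)
print(round(f_measure(90.5, 90.1), 1))  # 90.3 (CTB)
```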
", "type_str": "table", "num": null, "text": "Detailed performance of our system on the 1 st SIGHAN Chinese word segmentation benchmark data" }, "TABREF6": { "html": null, "content": "
PK corpus (Table 3):
System           | P    | R    | F    | R_OOV | R_IV
Ours             | 93.5 | 96.1 | 94.8 | 80.5  | 97.3
Zhang et al [25] | 94.0 | 96.2 | 95.1 | 72.4  | 97.9
Wu [26]          | 93.8 | 95.5 | 94.7 | 68.0  | 97.6
Chen [27]        | 93.8 | 95.5 | 94.6 | 64.7  | 97.7

CTB corpus (Table 4):
System           | P    | R    | F    | R_OOV | R_IV
Ours             | 90.5 | 90.1 | 90.3 | 77.6  | 92.9
Zhang et al [25] | 87.5 | 88.6 | 88.1 | 70.5  | 92.7
Duan et al [28]  | 85.6 | 89.2 | 87.4 | 64.4  | 94.7
", "type_str": "table", "num": null, "text": "Comparison of our system with other best-reported systems on the PK corpus Comparison of our system with other best-reported systems on the CTB corpus" }, "TABREF7": { "html": null, "content": "
PK corpus (Table 5):
Approach                               | #useful contexts | F    | R_OOV | R_IV
Error-Driven Learning                  | 98K              | 94.8 | 80.5  | 97.3
Frequency Filtering                    | 98K (63K)        | 94.2 | 72.7  | 97.4
Frequency Filtering (best performance) | 250K (90K)       | 94.7 | 80.2  | 97.3
Frequency Filtering                    | 400K (94K)       | 94.6 | 79.1  | 97.1

CTB corpus (Table 6):
Approach                               | #useful contexts | F    | R_OOV | R_IV
Error-Driven Learning                  | 43K              | 90.3 | 77.6  | 92.9
Frequency Filtering                    | 43K (21K)        | 89.5 | 72.1  | 92.8
Frequency Filtering (best performance) | 150K             | 90.1 | 76.1  | 93.0
Frequency Filtering                    | 400K (40K)       | 89.9 | 75.8  | 92.9
", "type_str": "table", "num": null, "text": "Comparison of the error-driven learning approach with the frequency filtering approach in learning useful contexts for the output model of MIIM on the PK corpus (Total Comparison of the error-driven learning approach with the frequency filtering approach in learning useful contexts for the output model of MIIM on the CTB corpus (Total number of possible contexts: 1038K)" } } } }