{ "paper_id": "O03-5001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:01:32.903461Z" }, "title": "A Class-based Language Model Approach to Chinese Named Entity Identification 1", "authors": [ { "first": "Jian", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing University of Posts&Telecommunications", "location": {} }, "email": "sunjian@ict.ac.cn" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "", "affiliation": {}, "email": "mingzhou@microsoft.com" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "", "affiliation": {}, "email": "jfgao@microsoft.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a method of Chinese named entity (NE) identification using a class-based language model (LM). Our NE identification concentrates on three types of NEs, namely, personal names (PERs), location names (LOCs) and organization names (ORGs). Each type of NE is defined as a class. Our language model consists of two sub-models: (1) a set of entity models, each of which estimates the generative probability of a Chinese character string given an NE class; and (2) a contextual model, which estimates the generative probability of a class sequence. The class-based LM thus provides a statistical framework for incorporating Chinese word segmentation and NE identification in a unified way. This paper also describes methods for identifying nested NEs and NE abbreviations. Evaluation based on a test data with broad coverage shows that the proposed model achieves the performance of state-of-the-art Chinese NE identification systems.", "pdf_parse": { "paper_id": "O03-5001", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a method of Chinese named entity (NE) identification using a class-based language model (LM). 
Our NE identification concentrates on three types of NEs, namely, personal names (PERs), location names (LOCs) and organization names (ORGs). Each type of NE is defined as a class. Our language model consists of two sub-models: (1) a set of entity models, each of which estimates the generative probability of a Chinese character string given an NE class; and (2) a contextual model, which estimates the generative probability of a class sequence. The class-based LM thus provides a statistical framework for incorporating Chinese word segmentation and NE identification in a unified way. This paper also describes methods for identifying nested NEs and NE abbreviations. Evaluation based on a test data set with broad coverage shows that the proposed model achieves the performance of state-of-the-art Chinese NE identification systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Named Entity (NE) identification is the problem of detecting entity names in documents and then classifying them into corresponding categories. This is an important step in many natural language processing applications, such as information extraction (IE), question answering (QA), and machine translation (MT). Much research has been carried out on English NE identification. As a result, some systems have been widely applied in practice. On the other hand, Chinese NE identification is a different task because in Chinese, there is no space to mark the boundaries of words and no clear definition of words. In addition, Chinese NE 1 This work was done while the author was visiting Microsoft Research Asia.", "cite_spans": [ { "start": 634, "end": 635, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "identification is intertwined with word segmentation. 
Traditional approaches to Chinese NE identification usually employ two separate steps, namely, word segmentation and NE identification. As a result, errors in word segmentation will lead to errors in NE identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Moreover, the identification of NE abbreviations and nested NEs has not yet been investigated thoroughly in previous works. For example, nested locations in organization names have not been discussed at the Message Understanding Conference (MUC).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we present a method of Chinese NE identification using a class-based LM, in which the definitions of classes are extended in comparison with our previous work [Sun, Gao et al., 2002] . The model consists of two sub-models: (1) a set of entity models, each of which estimates the generative probability of a Chinese character string given an NE class;", "cite_spans": [ { "start": 174, "end": 197, "text": "[Sun, Gao et al., 2002]", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "and (2) a contextual model which estimates the generative probability of a class sequence. Our model thus provides a statistical framework for incorporating Chinese word segmentation and NE identification in a unified way. In the paper, we shall also describe our methods for identifying nested NEs and NE abbreviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of this paper is organized as follows: Section 2 briefly discusses related work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Section 3 presents in detail the class-based LM for Chinese NE identification. 
Section 4 discusses our methods of identifying NE abbreviations. Section 5 reports experimental results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Section 6 presents conclusions and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Traditionally, the approaches to NE identification have been rule-based. They attempt to perform matching against a sequence of words in much the same way that a general regular expression matcher does. Some of these systems are, FACILE ], IsoQuest's NetOwl [Krupha and Hausman, 1998 ], the LTG system [Mikheev et al., 1998 ], the NTU system [Chen et al., 1998 ], LaSIE [Humphreys et al., 1998 ], the Oki system [Fukumoto et al., 1998 ], and the Proteus system [Grishman, 1995] . However, the rule-based approaches are neither robust nor portable.", "cite_spans": [ { "start": 258, "end": 283, "text": "[Krupha and Hausman, 1998", "ref_id": null }, { "start": 302, "end": 323, "text": "[Mikheev et al., 1998", "ref_id": "BIBREF28" }, { "start": 342, "end": 360, "text": "[Chen et al., 1998", "ref_id": "BIBREF10" }, { "start": 370, "end": 393, "text": "[Humphreys et al., 1998", "ref_id": "BIBREF22" }, { "start": 412, "end": 434, "text": "[Fukumoto et al., 1998", "ref_id": "BIBREF18" }, { "start": 461, "end": 477, "text": "[Grishman, 1995]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." 
}, { "text": "Recently, research on NE identification has focused on machine learning approaches, including the hidden Markov model [Bikel et al., 1999; Miller et al., 1998; Gotoh and Renals, 2000; Sun et al., 2002; Zhou and Su, 2002] , maximum entropy model [Borthwick, 1999] , decision tree [Sekine et al., 1998 ], transformation-based learning [Brill, 1995; Aberdeen et al., 1995; Black and Vasilakopoulos, 2002] , boosting [Collins, 2002; Carreras et al., 2002; Tsukamoto et al., 2002; Wu et al., 2002] , the voted perceptron [Collins, 2002] , conditional Markov model [Jansche, 2002] , support vector machine [McNamee and Mayfield, 2002; Takeuchi and Collier, 2002] , memory-based learning [Sang, 2002] and learning approaches stacking [Florian, 2002] . Some systems, especially those for English NE identification, have been applied to practical applications.", "cite_spans": [ { "start": 118, "end": 138, "text": "[Bikel et al., 1999;", "ref_id": "BIBREF5" }, { "start": 139, "end": 159, "text": "Miller et al., 1998;", "ref_id": "BIBREF29" }, { "start": 160, "end": 183, "text": "Gotoh and Renals, 2000;", "ref_id": "BIBREF20" }, { "start": 184, "end": 201, "text": "Sun et al., 2002;", "ref_id": "BIBREF35" }, { "start": 202, "end": 220, "text": "Zhou and Su, 2002]", "ref_id": null }, { "start": 245, "end": 262, "text": "[Borthwick, 1999]", "ref_id": "BIBREF4" }, { "start": 279, "end": 299, "text": "[Sekine et al., 1998", "ref_id": "BIBREF32" }, { "start": 333, "end": 346, "text": "[Brill, 1995;", "ref_id": "BIBREF7" }, { "start": 347, "end": 369, "text": "Aberdeen et al., 1995;", "ref_id": "BIBREF0" }, { "start": 370, "end": 401, "text": "Black and Vasilakopoulos, 2002]", "ref_id": "BIBREF3" }, { "start": 413, "end": 428, "text": "[Collins, 2002;", "ref_id": "BIBREF16" }, { "start": 429, "end": 451, "text": "Carreras et al., 2002;", "ref_id": "BIBREF8" }, { "start": 452, "end": 475, "text": "Tsukamoto et al., 2002;", "ref_id": "BIBREF39" }, { "start": 476, "end": 492, 
"text": "Wu et al., 2002]", "ref_id": "BIBREF41" }, { "start": 516, "end": 531, "text": "[Collins, 2002]", "ref_id": "BIBREF16" }, { "start": 559, "end": 574, "text": "[Jansche, 2002]", "ref_id": "BIBREF23" }, { "start": 600, "end": 628, "text": "[McNamee and Mayfield, 2002;", "ref_id": "BIBREF27" }, { "start": 629, "end": 656, "text": "Takeuchi and Collier, 2002]", "ref_id": "BIBREF37" }, { "start": 681, "end": 693, "text": "[Sang, 2002]", "ref_id": "BIBREF31" }, { "start": 727, "end": 742, "text": "[Florian, 2002]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "When it comes to the Chinese language, however, NE identification systems still cannot achieve satisfactory performance. Some representative systems include those developed in [Sun et al., 1994; Chen and Lee, 1994; Chen et al., 1998; Yu et al., 1998; Zhang, 2001; Sun et al., 2002] .", "cite_spans": [ { "start": 176, "end": 194, "text": "[Sun et al., 1994;", "ref_id": "BIBREF36" }, { "start": 195, "end": 214, "text": "Chen and Lee, 1994;", "ref_id": "BIBREF11" }, { "start": 215, "end": 233, "text": "Chen et al., 1998;", "ref_id": "BIBREF10" }, { "start": 234, "end": 250, "text": "Yu et al., 1998;", "ref_id": "BIBREF42" }, { "start": 251, "end": 263, "text": "Zhang, 2001;", "ref_id": "BIBREF43" }, { "start": 264, "end": 281, "text": "Sun et al., 2002]", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." 
}, { "text": "We will mainly introduce two systems, namely, the rule-based NTU system for Chinese [Chen et al., 1998 ] and the machine learning based BBN system [Bikel et al., 1999] , because these are representative of the two different approaches.", "cite_spans": [ { "start": 84, "end": 102, "text": "[Chen et al., 1998", "ref_id": "BIBREF10" }, { "start": 147, "end": 167, "text": "[Bikel et al., 1999]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Generally speaking, the NTU system employs the rule-based method. It utilizes different types of information and models, including character conditions, statistical information, titles, punctuation marks, organization and location keywords, speech-act and locative verbs, a cache model and an n-gram model. Different kinds of NEs employ different rules. For example, one rule for identifying organization names is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "CountryName OrganizationNameKeyword e.g.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OrganizationName", "sec_num": null }, { "text": "NEs are identified in the following steps: (1) segment text into a sequence of tokens; (2) identify named persons;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u56fd \u9986 US Embassy", "sec_num": null }, { "text": "(3) identify named organizations; (4) identify named locations; and (5) use an n-gram model to identify named organizations/locations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u56fd \u9986 US Embassy", "sec_num": null }, { "text": "The BBN model [Bikel et al., 1999] , a variant of the Hidden Markov Model (HMM), views NE identification as a classification problem and assigns to every word either one of the desired NE classes or the label NOT-A-NAME, meaning \"none of the desired class\". 
The HMM has a bigram LM of each NE class and other text. Another characteristic is that every word is a two-element vector consisting of the word itself and the word-feature. Given the model, the generation of words and name-classes is performed in three steps: (1) select a name-class; (2) generate the first word inside that name-class;", "cite_spans": [ { "start": 14, "end": 34, "text": "[Bikel et al., 1999]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "\u56fd \u9986 US Embassy", "sec_num": null }, { "text": "(3) generate all the subsequent words inside the current name-class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u56fd \u9986 US Embassy", "sec_num": null }, { "text": "There have been relatively fewer attempts to deal with NE abbreviations [Chen, 1996; . These researches mainly investigated the recovery of acronyms and non-standard words.", "cite_spans": [ { "start": 72, "end": 84, "text": "[Chen, 1996;", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "\u56fd \u9986 US Embassy", "sec_num": null }, { "text": "In this paper, we present a method of Chinese NE identification using a class-based LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u56fd \u9986 US Embassy", "sec_num": null }, { "text": "We also describe our methods of identifying nested NEs and NE abbreviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u56fd \u9986 US Embassy", "sec_num": null }, { "text": "A word-based n-gram LM is a stochastic model which predicts a word given the previous n-1 words by estimating the conditional probability P(w n |w 1 \u2026w n-1 ). A class-based LM extends the word-based LM by defining similar words as a class. It has been demonstrated to be a more effective way of dealing with the data-sparseness problem. 
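The two sub-models just described (a contextual model over class sequences and per-class entity models) can be sketched as a toy maximum-likelihood estimator. This is an illustrative bigram sketch over invented (word, class) training pairs, not the paper's implementation; the paper uses a trigram contextual model with back-off smoothing.

```python
from collections import defaultdict

def train_class_bigram(tagged_sents):
    '''MLE counts for a class-based bigram LM from (word, class) sequences:
    P(w_1..w_n, c_1..c_n) ~ prod_i P(c_i | c_i-1) * P(w_i | c_i).
    No smoothing; a toy sketch only.'''
    trans = defaultdict(lambda: defaultdict(int))  # class -> next-class counts
    emit = defaultdict(lambda: defaultdict(int))   # class -> word counts
    for sent in tagged_sents:
        prev = '<BOS>'
        for word, cls in sent:
            trans[prev][cls] += 1
            emit[cls][word] += 1
            prev = cls
        trans[prev]['<EOS>'] += 1
    return trans, emit

def sentence_prob(sent, trans, emit):
    '''Joint probability of a sentence and its class sequence under the model.'''
    p, prev = 1.0, '<BOS>'
    for word, cls in sent:
        p *= trans[prev][cls] / sum(trans[prev].values())
        p *= emit[cls][word] / sum(emit[cls].values())
        prev = cls
    return p * trans[prev]['<EOS>'] / sum(trans[prev].values())
```

In the real model the per-word emission P(w | c) for an NE class is replaced by the entity-model score of the whole character string, which is what lets segmentation and NE identification share one framework.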
In this study, the class-based LM is applied to integrate Chinese word segmentation and NE identification in a unified framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "In this section, we first give definitions of classes. Then, we describe the elements of the class-based LM, parameter estimation, and how we apply the model to NE identification. For each NE type (PER, LOC, and ORG), we define 6 tags to mark the position of the current character (word) in the entity name as shown in Table 2 . ", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 326, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{C}_1^m = \\arg\\max_{C_1^m} P(C_1^m \\mid S_1^n) = \\arg\\max_{C_1^m} P(C_1^m) \\times P(S_1^n \\mid C_1^m) \\quad (1), \\qquad P(C_1^m) \\cong \\prod_{i=1}^{m} P(c_i \\mid c_{i-2} c_{i-1})", "eq_num": "(2)" } ], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "The entity model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "P(S_1^n | C_1^m)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "estimates the generative probability of a Chinese character sequence given an NE class, as shown in Equation 3 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "P(S_1^n | C_1^m) \u2245 \u220f_{i=1}^{m} P(s_{c_i-start} ... s_{c_i-end} | c_i) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." 
}, { "text": "By combining the contextual model and the entity models as in Equation 1 The computation of the joint probability of the two events (the input sentence and the hidden class sequence) is shown in the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": ") , | ( ) , | ( ) , | ( ) , | ( ) , | ( ) , | ( ) , | ( ) | ( ) , | ( ) 3 | ( ) | 3 ( ) | ( EOS P P P P P PT P PT PER P PT P PER BOS PT P PER P PER PER P BOS PER P \u00d7 \u00d7 \u00d7 \u00d7 \u00d7 \u00d7 \u00d7 \u00d7 \u00d7 \u00d7 \u00d7 where ) 3 | ( PER P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "will be described in Section 3.3.1. It should be noted that the computations of the generative probability of the two occurrences of \u603b are different. The first one is generated as the class PT, whereas the second is generated as the common word \u603b .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "In Section 3.3, we will describe the entity models in detail, and in Section 3.4, we will present our model estimation approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based LM Approach to NE Identification", "sec_num": "3." }, { "text": "In order to discriminate among the first, medial and last character in an NE, we design the entity models in such a way that the character (or word) position is utilized. 
For each kind of NE, different entity models are adopted as described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Models", "sec_num": "3.3" }, { "text": "For the class PER (including FN, PER1, PER2, and PER3), the entity model is a character-based trigram model. The modeling of PER3 is described in the following example. As shown in Figure 1 , the generative probability of the Chinese character sequence given the PER3 class is computed as follows:", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 189, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Person Model", "sec_num": "3.3.1" }, { "text": "P(s_1 s_2 s_3 | c = PER3) = P(PF | PER3, PB) \u00d7 P(s_1 | PER3, PB, PF) \u00d7 P(PI | PER3, PF, s_1) \u00d7 P(s_2 | PER3, s_1, PI) \u00d7 P(PL | PER3, PI, s_2) \u00d7 P(s_3 | PER3, s_2, PL) \u00d7 P(PE | PER3, PL, s_3) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Person Model", "sec_num": "3.3.1" }, { "text": "For example, the generative probability of \u6765 'Zhou Enlai' can be expressed by instantiating Equation (4) with the three characters of the name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Person Model", "sec_num": "3.3.1" }, { "text": "The FN, PER1, and PER2 are modeled in similar ways. Each class of FN, PER1, PER2, and PER3 corresponds to an entity model for one kind of personal name. 
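Our reading of Equation (4) is that the PER3 entity model interleaves position tags (PB, PF, PI, PL, PE) with the surname and given-name characters under a character trigram. The expansion below enumerates the generation events for a three-character name; the function name and tuple encoding are our own illustration, not the paper's code.

```python
def per3_events(chars):
    '''Expand a 3-character personal name into (symbol, history) generation
    events mirroring the PER3 character trigram with position tags.
    Each history tuple starts with the class PER3; the naming is ours.'''
    s1, s2, s3 = chars
    return [
        ('PF', ('PER3', 'PB')),        # family-name slot after the begin tag
        (s1,   ('PER3', 'PB', 'PF')),  # surname character
        ('PI', ('PER3', 'PF', s1)),    # first given-name slot
        (s2,   ('PER3', s1, 'PI')),    # first given-name character
        ('PL', ('PER3', 'PI', s2)),    # last given-name slot
        (s3,   ('PER3', s2, 'PL')),    # last given-name character
        ('PE', ('PER3', 'PL', s3)),    # end tag
    ]
```

Multiplying P(symbol | history) over these seven events reproduces the product form of Equation (4).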
But in the contextual model, the four classes correspond to one class (PER).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Person Model", "sec_num": "3.3.1" }, { "text": "For the class LOCW, the entity model is a word-based trigram model. If the last word in the candidate location name is a location keyword, it can be generalized as class LK, which is also modeled in the form of a unigram. For example, the generative probability of 'Beijing City' in the location model can be expressed as: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Location Model", "sec_num": "3.3.2" }, { "text": ") , , | ( ) | ( ) , , | ( ) , , | ( ) , , | ( ) , | ( ) |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Location Model", "sec_num": "3.3.2" }, { "text": "For the class ORG, the entity model is a class-based trigram model. Personal names and location names nested in ORG are generalized as classes PER and LOC, respectively. Thus, we can identify nested personal names and location names using the class-based model. The organization keyword in the ORG is also generalized as the OK class, which is modeled in the form of a unigram.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organization Model", "sec_num": "3.3.3" }, { "text": "It is obvious that personal titles and special verbs are important clues for identifying personal names (e.g., [Chen et al., 1998] ). In our study, personal titles and special verbs are adopted to help identify personal names by constructing a unigram model of PT and a unigram model of PV. 
Accordingly, the generative probability of a specific personal title w_i can be computed as", "cite_spans": [ { "start": 111, "end": 130, "text": "[Chen et al., 1998]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_i \\mid c = PT)", "eq_num": "(5)" } ], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "and that of a specific speech-act verb w_i can be computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_i \\mid c = PV)", "eq_num": "(6)" } ], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "We can also build unigram models for classes LK and OK in similar ways.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "In addition, if c is a word that does not belong to the above-defined classes, the generative probability is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "P(s_{c-start} ... s_{c-end} | c) = 1 (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "where the Chinese character sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "s_{c-start} ... s_{c-end}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "is a single word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Models", "sec_num": "3.3.4" }, { "text": "As discussed in Section 3.2, there are two probabilities to be estimated, P(C_1^m) and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Estimation", "sec_num": "3.4" }, { "text": "P(S_1^n | C_1^m)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Estimation", "sec_num": "3.4" }, { "text": ". Both of them are estimated using maximum likelihood estimation (MLE) based on the training data, which are obtained by tagging the NEs in the text using the parser NLPWin 3 . Smoothing the MLE is essential to avoid zero probability for events that were not observed in the training data. We apply the standard techniques, in which more specific models are smoothed with progressively less specific models. The details of the back-off smoothing method we use are described in [Gao et al., 2001] .", "cite_spans": [ { "start": 477, "end": 495, "text": "[Gao et al., 2001]", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Model Estimation", "sec_num": "3.4" }, { "text": "In what follows, we will describe our model estimation approach. 
We will assume that a sample training data set has one sentence: \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Estimation", "sec_num": "3.4" }, { "text": "\u6765 \u603b \u4eec \u603b \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Estimation", "sec_num": "3.4" }, { "text": "The corresponding annotated training data 4 are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Estimation", "sec_num": "3.4" }, { "text": "We extract training data for the contextual model by replacing the names in the above example with corresponding class tags, i.e., PER PT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Model Estimation", "sec_num": "3.4.1" }, { "text": ". The contextual model parameters are computed by using MLE together with back-off smoothing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u4eec \u603b", "sec_num": null }, { "text": "We can also obtain the training data of each entity model. For example, the PER3 list we obtained from the above example has one instance, \u6765 . The corresponding training data for PER3, where position tags are introduced, are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Model Estimation", "sec_num": "3.4.2" }, { "text": "PB PF PI PL \u6765 PE .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Model Estimation", "sec_num": "3.4.2" }, { "text": "The model parameters of PER3 are computed using MLE and back-off smoothing. We can also estimate other entity models in a similar way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Model Estimation", "sec_num": "3.4.2" }, { "text": "The NE identification procedure is as follows: (1) identify PERs and LOCs; (2) identify ORGs based on the output of identifying PERs and LOCs. Thus, the PERs and LOCs nested in ORGs can be identified. 
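The contextual-model training data of Section 3.4.1 are obtained by replacing each annotated NE (or keyword) with its class tag while ordinary words pass through. A minimal sketch; the (surface, tag) input format is our own illustration, with None marking an ordinary word.

```python
def contextual_training_tokens(annotated):
    '''Replace each annotated span with its class tag (PER, LOC, ORG, PT, ...);
    ordinary words (tag None) are kept as-is. The resulting token sequence is
    what the contextual n-gram model is trained on with MLE plus back-off.'''
    return [tag if tag is not None else surface for surface, tag in annotated]
```

The surface forms removed here are not lost: they become the training instances of the corresponding entity models (e.g., the PER3 list).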
Since the steps involved in identifying PERs and LOCs, and those involved in identifying ORGs are similar, we will only describe the former in the following.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "Generally speaking, the decoding process consists of three steps: lexical word candidate generation, NE candidate generation, and Viterbi search. A few heuristics and NE grammars, shown in Figure 2 , are used to reduce the search space when NE candidates are generated. Given a sequence of Chinese characters, the decoding process is as follows:", "cite_spans": [], "ref_spans": [ { "start": 189, "end": 197, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "Step 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "Lexical word candidate generation. All possible word segmentations are generated according to a Chinese lexicon containing 120,050 entries. The lexicon, in which each entry does not contain the NE tags even if it is a PER, LOC or ORG, is only used for segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "Step 2: NE candidate generation. NE candidates are generated in two steps: (1) candidates are generated according to NE grammars; (2) each candidate is assigned a probability by using the corresponding entity model. 
Two kinds of heuristic information, namely, internal information and contextual information, are used for a more effective search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "The internal information, which is used as an NE candidate trigger, includes: (1) a Chinese family name list, containing 373 entries (e.g., 'Zhou', 'Li');", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "(2) a transliterated name character list, containing 618 characters (e.g., 'shi', \u987f 'dun').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "The contextual information used for computing the generative probability includes: (1) a list of personal title, containing 219 entries (e.g., \u603b 'premier');", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "(2) a list of speech-act verbs, containing 9191 entries (e.g., 'point out');", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "(3) the left and right words of the PER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "Step 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "Viterbi Search. 
Viterbi search is used to select the hypothesis with the highest probability as the best output, from which PERs and LOCs can be obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "For the identification of ORGs, the organization keyword list (containing 1,355 entries)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "is utilized both to generate candidates and to compute generative probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.5" }, { "text": "NEs with the same meaning, which often occur more than once in a document, are likely to appear in different expressions. For example, the entity names \" \u5b66\" (Peking university)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Chinese NE Abbreviations", "sec_num": "4." }, { "text": "and \" \" (an abbreviation of \" \u5b66\") might occur in different sentences in the same document. In this case, the whole name may be identified correctly, whereas its abbreviation may not be. NE abbreviations account for about 10 percent of Chinese NEs. Therefore, identifying NE abbreviations is essential for improving the performance of Chinese NE identification. To the best of our knowledge, there has been no systematic study on this topic up to now. In this study, we applied the language model method to the task. We adopted the language model because the identification of NE abbreviations can be easily incorporated into the class-based LM framework described in Section 3. Furthermore, doing so lessens the labor required to develop rules for NE abbreviations. 
After a whole NE name has been identified, the procedure for identifying NE abbreviations is as follows: (1) generate all the candidates of NE abbreviations according to the corresponding generation pattern;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Chinese NE Abbreviations", "sec_num": "4." }, { "text": "(2) assign to each one a generative probability (or score) by using the corresponding model; (3) store the candidates in the lattice for Viterbi search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Chinese NE Abbreviations", "sec_num": "4." }, { "text": "In Sections 4.1 to 4.3, we will describe the abbreviation models applied to abbreviations of personal names, location names, and organization names, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Chinese NE Abbreviations", "sec_num": "4." }, { "text": "Suppose that the whole name of PER s 1 s 2 s 3 has been identified; we generate two kinds of is estimated from the cache belonging to the PER class. At any given time during the NE identification task, the cache for a specific class contains NEs that have been identified as belonging to that class. 
After the abbreviation candidates are generated, they are stored in the lattice for search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Chinese PER Abbreviation 5", "sec_num": "4.1" }, { "text": "The LOC abbreviation (LABB) entity model is a unigram model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling LOC Abbreviations", "sec_num": "4.2" }, { "text": "P(s | c_LABB).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling LOC Abbreviations", "sec_num": "4.2" }, { "text": "The procedure for identifying location abbreviations is as follows: (1) generate LABB candidates according to the list of location abbreviations; (2) determine whether each candidate is an LABB based on the contextual model. For example, the generative probability P(中日关系) for the sequence 中日关系 'Sino-Japan relations' is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling LOC Abbreviations", "sec_num": "4.2" }, { "text": "P(中日关系) = P(LABB | BOS) × P(中 | LABB) × P(LABB | BOS, LABB) × P(日 | LABB) × P(关系 | LABB, LABB) × P(EOS | LABB, 关系)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling LOC Abbreviations", "sec_num": "4.2" }, { "text": "When an organization name A = w_1 w_2 … w_N is recognized, all the abbreviation candidates of the organization are generated according to the patterns shown in Table 3. Since there are no training data for the ORG abbreviation model, it is impossible to estimate the model parameters directly. We therefore utilize linguistic knowledge of abbreviation generation and construct a score function for the ORG abbreviation candidates. 
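The contextual-model computation for the 'Sino-Japan relations' example can be sketched as a product of class-transition and emission terms. The romanized token names and all probability values below are illustrative placeholders, not estimates from the paper's corpus:

```python
import math

# One factor per term in the example: class transitions P(c | history)
# come from the contextual model, emissions P(token | c) from the entity
# models. All numbers are made up for illustration.
terms = {
    "P(LABB | BOS)":          0.05,  # transition into first abbreviation
    "P(zhong | LABB)":        0.20,  # emission: first abbreviation character
    "P(LABB | BOS, LABB)":    0.10,  # trigram transition to second LABB
    "P(ri | LABB)":           0.10,  # emission: second abbreviation character
    "P(guanxi | LABB, LABB)": 0.02,  # ordinary word given the class history
    "P(EOS | LABB, guanxi)":  0.30,  # sentence-final transition
}
# Sum log-probabilities for numerical stability, then exponentiate.
log_p = sum(math.log(v) for v in terms.values())
p = math.exp(log_p)  # generative probability of the whole sequence
```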
The score function is defined such that the resulting scores of the ORG abbreviation candidates are comparable to those of other NE candidates, whose parameters (probabilities) are assigned using the probabilistic models described in Section 3.3.", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 166, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Empirical Modeling of ORG Abbreviations", "sec_num": "4.3" }, { "text": "The following example explains how a score is assigned. Suppose that 北京邮电大学 'Beijing University of Posts & Telecommunications' has been identified as an ORG earlier in the text, and that one of the ORG abbreviation candidates is 北邮.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. Generation Patterns 6 of Organization Abbreviations", "sec_num": null }, { "text": "The generative probability of 北京邮电大学 (P(北京邮电大学 | ORG)) in the ORG model and that of 北邮 (P(北邮 | Contextual Model)) in the contextual model can both be computed. We calculate the score of 北邮 in the organization abbreviation model (denoted as Score(北邮 | ORG abbr)) as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. Generation Patterns 6 of Organization Abbreviations", "sec_num": null }, { "text": "α × P(北京邮电大学 | ORG) + (1 − α) × P(北邮 | Contextual Model),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. Generation Patterns 6 of Organization Abbreviations", "sec_num": null }, { "text": "where α is set to 0.5. In addition, intuitively, the score of 北邮 in the organization abbreviation model should be no smaller than the probability of 北邮 in the contextual model, given that 北京邮电大学 has been identified as an ORG, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. Generation Patterns 6 of Organization Abbreviations", "sec_num": null }, { "text": "Score(北邮 | ORG abbr) ≥ P(北邮 | Contextual Model).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model)", "sec_num": null }, { "text": "Accordingly, a maximum function is used. Figures 3.1 and 3.2 show the state transitions in the lattice for the input sequence (e.g., 北邮).", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 58, "text": "Figures 3.1 and 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model)", "sec_num": null }, { "text": "[Figures 3.1 and 3.2: lattice state transitions at times t-1, t, t+1, t+2]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model)", "sec_num": null }, { "text": "To sum up, given an identified organization name A = w_1 w_2 … w_N, the score of a candidate ORG abbreviation J_1^N (where N is the number of words (or characters)) is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model)", "sec_num": null }, { "text": "Score(J_1^N | ORG abbr) ≅ max( P(J_1^N | Contextual Model), α × P(w_1 w_2 … w_N | ORG) + (1 − α) × P(J_1^N | Contextual Model) ) (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model)", "sec_num": null }, { "text": "where α is set to 0.5. After the abbreviation candidates are generated, they are added to the lattice for search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model)", "sec_num": null }, { "text": "We conducted evaluations in terms of precision (P), the proportion of identified NEs that are correct, and recall (R), the proportion of NEs in the test data that are correctly identified. There is one difference between the Multilingual Entity Task (MET) evaluation and our evaluation. 
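Equation (9) can be written directly as a small score function; the probability values in the usage example below are hypothetical:

```python
def org_abbr_score(p_abbr_ctx, p_whole_org, alpha=0.5):
    """Score(J | ORG abbr) per Equation (9): interpolate the whole-name ORG
    probability with the candidate's contextual probability, but never let
    the score fall below the contextual probability alone."""
    interp = alpha * p_whole_org + (1 - alpha) * p_abbr_ctx
    return max(p_abbr_ctx, interp)

# Hypothetical values: the whole ORG name is very likely under the ORG
# model, while the abbreviation is unlikely under the contextual model.
score = org_abbr_score(p_abbr_ctx=1e-6, p_whole_org=1e-3)
```

The `max` enforces the intuition stated above: once the whole name has been seen as an ORG, the abbreviation candidate can only score at least as high as it would under the contextual model alone.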
Nested NEs are evaluated in our system, whereas they are not in MET.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measures", "sec_num": "5.1" }, { "text": "The training corpus was taken from the People's Daily (1997 and 1998). The annotated training data set, parsed using NLPWin, contained 1,152,676 sentences (90,427k bytes). The training data set contained noise for two reasons. First, the NE guidelines used by NLPWin are slightly different from the ones we used. For example, in our output 7 of NLPWin, (Beijing City) was tagged as , while was tagged as LOC according to our guidelines. Second, there were errors in the parsing results. Therefore, we utilized 18 rules to correct the data. One of these rules is LN LocationKeyword LN, which denotes that a location name and an adjacent location keyword are united into a single location name. The following table shows some differences between the parsing results and the correct annotations according to our guidelines: ", "cite_spans": [ { "start": 54, "end": 78, "text": "[year 1997 and year 1998", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "5.2.1" }, { "text": "The statistics of the training data are shown in Table 5. We developed a large open-test data set based on our guidelines 8 . As shown in Table 6, the data set, which was balanced in terms of domain, style and time, contained approximately half a million Chinese characters. The test set contained 11,844 sentences, 49.84% of which contained at least one NE token. Note that the open-test data set was much larger than the MET test data set (in which the numbers of PERs, LOCs, and ORGs were 174, 750, and 377, respectively). 
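The precision and recall measures used in this evaluation can be computed from three counts; the function below is our sketch of the standard definitions, with hypothetical counts in the usage example:

```python
def precision_recall(num_correct, num_identified, num_gold):
    """P = correctly identified NEs / all NEs the system identified;
       R = correctly identified NEs / all NEs in the test data."""
    p = num_correct / num_identified if num_identified else 0.0
    r = num_correct / num_gold if num_gold else 0.0
    return p, r

# e.g. 80 correct out of 100 system outputs, 120 NEs in the gold standard
p, r = precision_recall(80, 100, 120)
```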
The numbers of abbreviations of PERs, LOCs, and ORGs in the open-test data set were 367, 729, and 475, respectively.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 135, "end": 142, "text": "Table 6", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "\u536b ", "sec_num": null }, { "text": "We conducted a baseline experiment, which consisted of two steps: parsing the test data using NLPWin; correcting the errors according to the rules. The performance achieved is shown in Table 7 . ", "cite_spans": [], "ref_spans": [ { "start": 185, "end": 192, "text": "Table 7", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Baseline NLPWin Performance", "sec_num": "5.3" }, { "text": "In order to investigate the contribution of the unified framework, heuristic information and the identification of NE abbreviations, the following experiments were conducted using our NE identification system:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.4" }, { "text": "(1) Experiments 1, 2 and 3 examined the contribution of the heuristics and unified framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.4" }, { "text": "(2) Experiments 4, 5 and 6 tested the performance of the system using our method of NE abbreviations identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.4" }, { "text": "(3) Experiment 7 compared the performance of identifying whole NEs and that of identifying NE abbreviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.4" }, { "text": "Experiment 1 was performed to examine the performance of a basic class-based model, in which no heuristic information was employed in the decoder in the unified framework. 
Experiment 2 examined the performance of a traditional method, which consisted of two separate steps: segmenting the sentence and then recognizing NEs. In the segmentation step, we searched for the word with the maximal length in the lexicon to split the input character string 10 . Heuristic information was employed in this experiment. Experiment 3 investigated the performance of the unified framework, with heuristic information adopted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments 1, 2 and 3: The contribution of the heuristics and unified framework", "sec_num": "5.4.1" }, { "text": "A comparison of the results of Experiment 1 and Experiment 3, which shows the contribution of heuristic information, is given in Table 8. A comparison of the results of Experiment 2 and Experiment 3, which shows the contribution of the unified method, is given in Table 9. 10 Every Chinese character in the input string, which can be seen as a single-character word, is also added into the segmentation lattice. We save the minimal-length segmentation in the lattice so that the character-based model (for PER) can be applied. 11 Exp. 1 denotes the results of Experiment 1, and so on. From Table 8, we observed that after the introduction of heuristic information, the precision of PER increased from 66.52% to 81.24%, and that of ORG from 37.12% to 75.90%. We also noticed that the recall of PER increased from 77.82% to 83.66%, and that of ORG from 45.58% to 47.58%. 
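The maximal-length segmentation used in Experiment 2 corresponds to greedy forward maximum matching. A minimal sketch follows; the toy lexicon is hypothetical, and the single-character fallback mirrors the behavior described in footnote 10:

```python
def forward_max_match(text, lexicon, max_len=4):
    """Greedy left-to-right segmentation: at each position take the longest
    lexicon word; fall back to a single character (itself a valid word)."""
    out, i = [], 0
    while i < len(text):
        for L in range(min(max_len, len(text) - i), 0, -1):
            if L == 1 or text[i:i+L] in lexicon:
                out.append(text[i:i+L])
                i += L
                break
    return out

lex = {"ab", "abc", "cd"}
seg = forward_max_match("abcd", lex)  # greedily takes "abc", then "d"
```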
Therefore, heuristic information proved to be an important knowledge resource for recognizing NEs.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 8", "ref_id": "TABREF12" }, { "start": 279, "end": 286, "text": "Table 9", "ref_id": null }, { "start": 601, "end": 608, "text": "Table 8", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Experiments 1, 2 and 3: The contribution of the heuristics and unified framework", "sec_num": "5.4.1" }, { "text": "From Table 9, we find that the precision and recall of PER, LOC and ORG all improved as a result of combining word segmentation with NE identification. For instance, the precision of PER increased from 80.17% to 81.24%, and the recall from 82.22% to 83.66%. We can therefore conclude that the unified framework for NE identification is the more effective method.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Experiments 1, 2 and 3: The contribution of the heuristics and unified framework", "sec_num": "5.4.1" }, { "text": "In order to examine the performance of our methods of identifying NE abbreviations, Experiments 4, 5 and 6 were conducted. Experiment 4 examined the effectiveness of modeling the abbreviations of personal names. Experiment 5 added modeling of the abbreviations of location names on top of Experiment 4, and Experiment 6 added modeling of the abbreviations of organization names on top of Experiment 5. The results are shown in Table 10. It can be seen that the recall of PER, LOC and ORG all showed distinct improvement. For example, the recalls increased from 83.66%, 78.65%, and 47.68% to 89.31%, 84.91%, and 59.75%, respectively. However, we also find that the precision of PER and LOC decreased slightly (PER: from 81.24% to 79.78%; LOC: from 86.89% to 86.02%). The reason is that, in general, the precision of identifying NE abbreviations is lower than that of identifying whole NE names. 
It is difficult to decide whether a Chinese character is an NE abbreviation, a single-character word, or part of an ordinary word. For example, the Chinese character 中 can be an abbreviation of a LOC (中国 'China'), a single-character word, or part of a word (e.g., 中间 'in the middle of'). Although the precisions decreased slightly, on the whole we can conclude that the performance of NE identification improved after the models of NE abbreviations were constructed.", "cite_spans": [], "ref_spans": [ { "start": 437, "end": 446, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments 4, 5 and 6: Performance achieved when modeling abbreviations of personal, location and organization names", "sec_num": "5.4.2" }, { "text": "In order to compare the performance of identifying whole NE names with that of identifying NE abbreviations in more detail, we show the results in Table 11. We can observe that, in general, the performance (precision and recall) of identifying NE abbreviations was about 10% lower than that of identifying whole NE names. 
From these two figures, we can see that:", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 151, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment 7: Comparing the performance of identifying whole NEs and NE abbreviations", "sec_num": "5.4.3" }, { "text": "(1) the results of the baseline class-based LM are better than those of NLPWin; (2) a distinct improvement was achieved by employing heuristic information; (3) the precision and recall rates improved when we adopted the unified framework; (4) modeling NE abbreviations distinctly improved the recall of all NEs (as shown in Figure 5) with only a trivial decrease in precision.", "cite_spans": [], "ref_spans": [ { "start": 330, "end": 338, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Experiment 7: Comparing the performance of identifying whole NEs and NE abbreviations", "sec_num": "5.4.3" }, { "text": "We classify the errors of the system into two types: Error 1 (a boundary error) and Error 2 (a class tag error), as shown in Figure 6. The distribution of these two kinds of errors is shown in Table 12. From Table 12, we observe that boundary errors accounted for a large percentage of the errors in Chinese NE identification. Errors for the three kinds of NEs are analyzed further in Sections 5.5.1, 5.5.2, and 5.5.3. For some errors, solutions are given. 
We also indicate some cases that could not be perfectly handled by our method.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 6", "ref_id": null }, { "start": 193, "end": 201, "text": "Table 12", "ref_id": "TABREF0" }, { "start": 209, "end": 217, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.5" }, { "text": "The major PER 12 errors are shown in Table 13:", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 45, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "PER Errors", "sec_num": "5.5.1" }, { "text": "We will try to deal with some of the above errors in our future work. Case (b) can be handled by adopting a nested model; Case (c) can be dealt with by constructing a model of Japanese names. Cases (a), (d), and (e) can only be partially dealt with by refining the contextual model in our framework. However, our current method does not provide a sound solution for Case (d), namely, aliases of personal names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PER Errors", "sec_num": "5.5.1" }, { "text": "LOC errors are shown in Table 14.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Table 14", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "LOC Errors", "sec_num": "5.5.2" }, { "text": "One reason for the errors in Case (a) was that there was noise of this kind in the training data. As for Case (b), the model of location name abbreviations can identify many abbreviations. However, a few identification errors remained because location abbreviations may be common words, e.g., \" \".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOC Errors", "sec_num": "5.5.2" }, { "text": "ORG errors are shown in Table 15. Case (a) can be partly handled by refining the model of organization names. 
However, our system may fail to handle an instance like \" 华 门 \" because it does not have enough information to detect the right boundary of the organization name. In addition, our class-based LM cannot successfully deal with Case (b) at present.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ORG Errors", "sec_num": "5.5.3" }, { "text": "In addition, although the language model method was adopted to identify the abbreviations of organization names, some organization name abbreviations were still not identified. One reason is that some abbreviations are not covered by the above patterns. The other reason is that the score function in Equation (9) is an empirical formula and needs to be improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ORG Errors", "sec_num": "5.5.3" }, { "text": "We also evaluated our system (nested NEs were not counted in this case) using the MET2 test data and compared the performance achieved with that of two public systems 13 (the NTU system and the KRDL system). As shown in Table 16, our system outperformed the NTU system. Our system was also better than the KRDL system for PERs, but its performance for LOCs and ORGs was worse than that of the KRDL system. The possible reasons are: (1) Our NE definitions are slightly different from those of MET2. (2) The model is estimated using a general-domain corpus, which is quite different from the domain of the MET2 data. (3) An NE dictionary is not utilized in our system. 
Our method provides a unified framework, in which it is easy to incorporate Chinese word segmentation and NE identification. As has been demonstrated, our unified method performs better than traditional methods. We have also presented our method of identifying NE abbreviations. The language model method has several advantages over rule-based ones. First, it can integrate the identification of NE abbreviations into the class-based LM. Secondly, it reduces the labor of developing rules for NE abbreviations. In addition, we have also employed a two-level ORG model so that the nested entities in organization names can be identified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions & Future work", "sec_num": "6." }, { "text": "The achieved precision rates of PER, LOC, ORG on the test data were 79.78%, 86.02%, and 76.79%, respectively, and the achieved recall rates were 89.29%, 84.87%, and 59.75%, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions & Future work", "sec_num": "6." }, { "text": "There are several possible directions of future research. First, since we use a parser to annotate the training set, parsing errors will be an obstacle to further improvement. Therefore, we need to find an effective way to correct the mistakes and perform necessary automatic correction. Secondly, a more delicate model of ORG will be investigated to characterize the features of all kinds of organizations. Thirdly, the current method only utilizes the features in the currently processed sentence, not the global information in the text. For example, suppose that the same NE (e.g., \u6765) occurs twice in different sentences in a document. It is possible that the NE will be tagged PER in one sentence but not recognized in the other. This raises a question as to how to construct a model of global information. 
Furthermore, the model of organization name abbreviations also needs to be improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions & Future work", "sec_num": "6." }, { "text": "In the step of identifying PERs and LOCs, the classes LOCW and LABB are modeled in context ; in the step of identifying ORGs, the two classes are united into one class, LOC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In fact, NLPWin has many output settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "One difference between our guidelines and those of MET is that nested persons and location names in organizations are tagged according to our guidelines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The statistics reported here are slightly different from those reported earlier(Sun, Gao, et al., 2002) because we checked the accuracy and consistency of the test data again for our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at http://www.itl.nist.gov/iad/894.02/related_projects/muc/proceedings/ne_chinese_score_report.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Chang-ning Huang, Andi Wu, Hang Li and other colleagues at Microsoft Research for their help. We also thank Lei Zhang for his help. 
In addition, we thank the three anonymous reviewers for their useful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "MITRE: Description of the Alembic System Used for MUC-6", "authors": [ { "first": "J", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "D", "middle": [], "last": "Day", "suffix": "" }, { "first": "L", "middle": [], "last": "Hirschman", "suffix": "" }, { "first": "P", "middle": [], "last": "Robinson", "suffix": "" }, { "first": "M", "middle": [], "last": "Vilain", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth Message Understanding Conference", "volume": "", "issue": "", "pages": "141--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aberdeen J., Day D., Hirschman L., Robinson P. and Vilain M., \"MITRE: Description of the Alembic System Used for MUC-6\", Proceedings of the Sixth Message Understanding Conference, pp. 141-155, 1995.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The Festival Speech synthesis system", "authors": [ { "first": "A", "middle": [], "last": "Black", "suffix": "" }, { "first": "P", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "R", "middle": [], "last": "Caley", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black A., Taylor P. and Caley R., The Festival Speech synthesis system. 
http://www.cstr.ed.ac.uk/projects/festival/ , 1998.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Facile: Description of the NE System Used For MUC-7", "authors": [ { "first": "W", "middle": [ "J" ], "last": "Black", "suffix": "" }, { "first": "F", "middle": [], "last": "Rinaldi", "suffix": "" }, { "first": "D", "middle": [], "last": "Mowatt", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black W.J., Rinaldi F. and Mowatt D., \"Facile: Description of the NE System Used For MUC-7\", Proceedings of 7th Message Understanding Conference, 1998.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language Independent Named Entity Classification by modified Transformation-based Learning and by Decision Tree Induction", "authors": [ { "first": "W", "middle": [ "J" ], "last": "Black", "suffix": "" }, { "first": "A", "middle": [], "last": "Vasilakopoulos", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black W.J. and Vasilakopoulos A., \"Language Independent Named Entity Classification by modified Transformation-based Learning and by Decision Tree Induction\", The 6th Conference on Natural Language Learning, 2002.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Maximum Entropy Approach to Named Entity Recognition", "authors": [ { "first": "", "middle": [ "A" ], "last": "Borthwick", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Borthwick. 
A., \"A Maximum Entropy Approach to Named Entity Recognition\", PhD Dissertation, 1999.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An algorithm that learns what's in a name", "authors": [ { "first": "D", "middle": [], "last": "Bikel", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwarta", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 1999, "venue": "Machine Learning Journal Special Issue on Natural Language Learning", "volume": "34", "issue": "", "pages": "211--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bikel D., Schwarta R. and Weischedel R., \"An algorithm that learns what's in a name\", Machine Learning Journal Special Issue on Natural Language Learning, 34, pp. 211-231, 1999.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Class-based n-gram models of natural language", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Dellapietra", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Lai", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Mercer", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown P. F., DellaPietra V. J., deSouza P. V., Lai J. C., and Mercer R. 
L., \"Class-based n-gram models of natural language\", Computational Linguistics, 18(4): 467-479, 1992.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Transform-based Error-Driven Learning and Natural Language Processing: A Case Study in Part-of-speech Tagging", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "4", "pages": "543--565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill E., \"Transform-based Error-Driven Learning and Natural Language Processing: A Case Study in Part-of-speech Tagging\", Computational Linguistics, 21(4): 543-565, 1995.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Named Entity Extraction using AdaBoost", "authors": [ { "first": "X", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "L", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "L", "middle": [], "last": "Padr\u00f3", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carreras X., M\u00e0rquez L. and Padr\u00f3 L., \"Named Entity Extraction using AdaBoost\", The 6th Conference on Natural Language Learning, 2002.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Large-corpus-based methods for Chinese personal name recognition", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "S", "middle": [ "D" ], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "X", "middle": [ "Z" ], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Ke", "suffix": "" } ], "year": 1992, "venue": "Journal of Chinese Information Processing", "volume": "6", "issue": "3", "pages": "7--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang J.S., Chen S. D., Zheng Y., Liu X. 
Z., and Ke S. J., \"Large-corpus-based methods for Chinese personal name recognition\", Journal of Chinese Information Processing, 6(3): 7-15, 1992.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Description of the NTU System Used for MET2", "authors": [ { "first": "H", "middle": [ "H" ], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [ "W" ], "last": "Ding", "suffix": "" }, { "first": "S", "middle": [ "C" ], "last": "Tsai", "suffix": "" }, { "first": "G", "middle": [ "W" ], "last": "Bian", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen H.H., Ding Y.W., Tsai S.C. and Bian G.W., \"Description of the NTU System Used for MET2\", Proceedings of 7th Message Understanding Conference, 1998.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Identification of Organization Names in Chinese Texts", "authors": [ { "first": "H", "middle": [ "H" ], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Lee", "suffix": "" } ], "year": 1994, "venue": "Communication of Chinese and Oriental Languages Information Processing Society", "volume": "4", "issue": "", "pages": "131--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen H.H., Lee J.C., \"The Identification of Organization Names in Chinese Texts\", Communication of Chinese and Oriental Languages Information Processing Society, 4(2): pp. 131-142, 1994 (in Chinese).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "S", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1999, "venue": "Computer Speech and Language", "volume": "13", "issue": "", "pages": "359--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, S. 
F., and Goodman, J., \"An empirical study of smoothing techniques for language modeling\". Computer Speech and Language, 13: 359-394, October 1999.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The automatic identification and recovery of Chinese acronyms", "authors": [ { "first": "Si-Qing", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1996, "venue": "Studies in the Linguistics Sciences", "volume": "26", "issue": "", "pages": "61--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Si-Qing., \"The automatic identification and recovery of Chinese acronyms\", Studies in the Linguistics Sciences, 26(1/2): 61-82. 1996.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "MUC-7 Named Entity Task Definition Version 3.5\". Available by from ftp.muc.saic.com/pub/MUC/MUC7-guidelines", "authors": [ { "first": "", "middle": [ "N" ], "last": "Chinchor", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinchor. N., \"MUC-7 Named Entity Task Definition Version 3.5\". 
Available from ftp.muc.saic.com/pub/MUC/MUC7-guidelines, 1997.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised models for named entity classification", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins M., Singer Y., \"Unsupervised models for named entity classification\", Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Ranking Algorithms for Named-Entity Extraction: Boosting and the Voted Perceptron", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "489--496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins M., \"Ranking Algorithms for Named-Entity Extraction: Boosting and the Voted Perceptron\", Proceedings of the 40th Annual Meeting of the ACL, Philadelphia, pp.
489-496, July 2002.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Named Entity Recognition as a House of Cards: Classifier Stacking", "authors": [ { "first": "R", "middle": [], "last": "Florian", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian R., \"Named Entity Recognition as a House of Cards: Classifier Stacking\", The 6th Conference on Natural Language Learning, 2002.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Oki Electric Industry: Description of the Oki System as Used for MET-2", "authors": [ { "first": "J", "middle": [], "last": "Fukumoto", "suffix": "" }, { "first": "M", "middle": [], "last": "Shimohata", "suffix": "" }, { "first": "F", "middle": [], "last": "Masui", "suffix": "" }, { "first": "M", "middle": [], "last": "Sasaki", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fukumoto J., Shimohata M., Masui F. 
and Sasaki M., \"Oki Electric Industry: Description of the Oki System as Used for MET-2\", Proceedings of 7th Message Understanding Conference, 1998.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The use of clustering techniques for language modelingapplication to Asian languages", "authors": [ { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "J", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "J", "middle": [], "last": "Miao", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "6", "issue": "", "pages": "27--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao J., Goodman J., Miao J., \"The use of clustering techniques for language modeling - application to Asian languages\", Computational Linguistics and Chinese Language Processing, Vol. 6, No. 1, pp 27-60.2001.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Information extraction from broadcast news", "authors": [ { "first": "Y", "middle": [], "last": "Gotoh", "suffix": "" }, { "first": "S", "middle": [], "last": "Renals", "suffix": "" } ], "year": 2000, "venue": "Philosophical Transactions of the Royal Society of London, series A: Mathematical, Physical and Engineering Sciences", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gotoh Y., Renals S., \"Information extraction from broadcast news\", Philosophical Transactions of the Royal Society of London, series A: Mathematical, Physical and Engineering Sciences, 2000.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The NYU System for MUC-6 or Where's the Syntax?", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the MUC-6 workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman R., \"The NYU System for MUC-6 or Where's the Syntax?\", Proceedings 
of the MUC-6 workshop, Washington. November 1995.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Description of the LaSIE-II System as Used for MUC-7", "authors": [ { "first": "K", "middle": [], "last": "Humphreys", "suffix": "" }, { "first": "R", "middle": [], "last": "Gaizauskas", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Humphreys K., Gaizauskas R., et al., Univ. of Sheffield: \"Description of the LaSIE-II System as Used for MUC-7\", Proceedings of 7th Message Understanding Conference, 1998.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Named Entity Extraction with Conditional Markov Models and Classifiers", "authors": [ { "first": "M", "middle": [], "last": "Jansche", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jansche M., \"Named Entity Extraction with Conditional Markov Models and Classifiers\", The 6th Conference on Natural Language Learning, 2002.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "IsoQuest Inc.: Description of the NetOwlTM Extractor System as Used for MUC-7", "authors": [ { "first": "G", "middle": [ "R" ], "last": "Krupka", "suffix": "" }, { "first": "K", "middle": [], "last": "Hausman", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krupka G. R., Hausman K.. 
\"IsoQuest Inc.: Description of the NetOwlTM Extractor System as Used for MUC-7\", Proceedings of 7th Message Understanding Conference, 1998.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A Cache-Based Natural Language Model for Speech Recognition", "authors": [ { "first": "R", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "", "middle": [ "R D" ], "last": "Mori", "suffix": "" } ], "year": 1990, "venue": "IEEE Transaction on Pattern Analysis and Machine Intelligence", "volume": "12", "issue": "6", "pages": "570--583", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuhn R., Mori. R.D. \"A Cache-Based Natural Language Model for Speech Recognition\", IEEE Transaction on Pattern Analysis and Machine Intelligence, Vol.12. No. 6. pp 570-583, 1990.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Internal and External Evidence in the Identification and Semantic Categorization of Proper Names", "authors": [ { "first": "D", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 1996, "venue": "Corpus Processing for Lexical Acquisition", "volume": "", "issue": "", "pages": "21--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "McDonald D., \"Internal and External Evidence in the Identification and Semantic Categorization of Proper Names\", Corpus Processing for Lexical Acquisition. pp. 21-39. MIT Press. Cambridge, MA. 1996.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Entity Extraction without Language-specific Resources", "authors": [ { "first": "P", "middle": [], "last": "Mcnamee", "suffix": "" }, { "first": "J", "middle": [], "last": "Mayfield", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McNamee P. 
and Mayfield J., \"Entity Extraction without Language-specific Resources\", The 6th Conference on Natural Language Learning, 2002.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Description of the LTG System Used for MUC-7", "authors": [ { "first": "A", "middle": [], "last": "Mikheev", "suffix": "" }, { "first": "C", "middle": [], "last": "Grover", "suffix": "" }, { "first": "M", "middle": [], "last": "Moens", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikheev A., Grover C. and Moens M., \"Description of the LTG System Used for MUC-7\", Proceedings of 7th Message Understanding Conference, 1998.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "BBN: Description of the SIFT System as Used for MUC-7", "authors": [ { "first": "S", "middle": [], "last": "Miller", "suffix": "" }, { "first": "M", "middle": [], "last": "Crystal", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller S., Crystal M., et al., \"BBN: Description of the SIFT System as Used for MUC-7\", Proceedings of 7th Message Understanding Conference, 1998.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A Statistical Profile of the Named Entity Task", "authors": [ { "first": "D", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Day", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palmer D., Day D.S., \"A Statistical Profile of the Named Entity Task\", Proceedings of the Fifth Conference on Applied Natural Language Processing, Washington, D.C., March 31-April 3, 1997.", "links": 
null }, "BIBREF31": { "ref_id": "b31", "title": "Memory-Based Named Entity Recognition", "authors": [ { "first": "E", "middle": [ "T K" ], "last": "Sang", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sang E.T.K., \"Memory-Based Named Entity Recognition\", The 6th Conference on Natural Language Learning. 2002.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A decision tree method for finding and classifying names in Japanese texts", "authors": [ { "first": "S", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "H", "middle": [], "last": "Shinou", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Sixth Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sekine S., Grishman R. and Shinou H., \"A decision tree method for finding and classifying names in Japanese texts\", Proceedings of the Sixth Workshop on Very Large Corpora, Montreal, Canada, 1998.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Normalization of non-standard words", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "A", "middle": [], "last": "Black", "suffix": "" }, { "first": "S", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2001, "venue": "Computer Speech and Language", "volume": "15", "issue": "3", "pages": "287--333", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat R., Black A., Chen S., et al., \"Normalization of non-standard words\", Computer Speech and Language, 15(3): 287-333, 2001.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Corpus-Based Methods in Chinese Morphology and Phonology", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Chilin", "middle": [], 
"last": "Shih", "suffix": "" } ], "year": 2001, "venue": "LSA Institute", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat R., Chilin Shih. \"Corpus-Based Methods in Chinese Morphology and Phonology\", 2001 LSA Institute, Santa Barbara.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Chinese Named Entity Identification Using Class-based Language Model", "authors": [ { "first": "J", "middle": [], "last": "Sun", "suffix": "" }, { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "L", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "C", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2002, "venue": "Proceeding of the 19th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "967--973", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun J., Gao J., Zhang L., Zhou M., Huang C., \"Chinese Named Entity Identification Using Class-based Language Model\". Proceeding of the 19th International Conference on Computational Linguistics, pp.967-973, 2002.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Identifying Chinese Names in Unrestricted Texts", "authors": [ { "first": "M", "middle": [ "S" ], "last": "Sun", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Huang", "suffix": "" }, { "first": "H", "middle": [ "Y" ], "last": "Gao", "suffix": "" }, { "first": "J", "middle": [], "last": "Fang", "suffix": "" } ], "year": 1994, "venue": "Communications of COLIPS", "volume": "4", "issue": "2", "pages": "113--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun M.S., Huang C.N., Gao H.Y., Fang J., \"Identifying Chinese Names in Unrestricted Texts\", Communications of COLIPS, Vol 4, No. 2, pp. 
113-122, 1994 (in Chinese)", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Use of Support Vector Machines in Extended Named Entity Recognition", "authors": [ { "first": "K", "middle": [], "last": "Takeuchi", "suffix": "" }, { "first": "N", "middle": [], "last": "Collier", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takeuchi K., Collier N., \"Use of Support Vector Machines in Extended Named Entity Recognition\", The 6th Conference on Natural Language Learning, 2002.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A Hybrid Approach to the Identification and Expansion of Abbreviations", "authors": [ { "first": "J", "middle": [], "last": "Toole", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toole J., \"A Hybrid Approach to the Identification and Expansion of Abbreviations\", RIAO'2000 Proceedings, 2000", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Learning with Multiple Stacking for Named Entity Recognition", "authors": [ { "first": "K", "middle": [], "last": "Tsukamoto", "suffix": "" }, { "first": "Y", "middle": [], "last": "Mitsuishi", "suffix": "" }, { "first": "M", "middle": [], "last": "Sassano", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsukamoto K., Mitsuishi Y., Sassano M., \"Learning with Multiple Stacking for Named Entity Recognition\", The 6th Conference on Natural Language Learning. 
2002.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm", "authors": [ { "first": "A", "middle": [ "J" ], "last": "Viterbi", "suffix": "" } ], "year": 1967, "venue": "IEEE Transactions on Information Theory", "volume": "", "issue": "13", "pages": "260--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Viterbi A. J., \"Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm\", IEEE Transactions on Information Theory, IT(13). pp. 260-269, April 1967.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Boosting for Named Entity Recognition", "authors": [ { "first": "D", "middle": [ "K" ], "last": "Wu", "suffix": "" }, { "first": "G", "middle": [], "last": "Ngai", "suffix": "" } ], "year": 2002, "venue": "The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu D.K., Ngai G., et al., \"Boosting for Named Entity Recognition\", The 6th Conference on Natural Language Learning, 2002.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Description of the Kent Ridge Digital Labs System Used for MUC-7", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Yu", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Bai", "suffix": "" }, { "first": "P", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu S.H., Bai S.H. 
and Wu P., \"Description of the Kent Ridge Digital Labs System Used for MUC-7\", Proceedings of 7th Message Understanding Conference, 1998.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Study on Chinese Proofreading Oriented Language Modeling", "authors": [ { "first": "L", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang L., \"Study on Chinese Proofreading Oriented Language Modeling\", PhD Dissertation, 2001.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Named Entity Recognition using an HMM-based Chunk Tagger", "authors": [ { "first": "G", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [], "last": "Su", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "473--480", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou G., Su J., \"Named Entity Recognition using an HMM-based Chunk Tagger\", Proceedings of the 40th Annual Meeting of the ACL, Philadelphia, pp. 473-480, July 2002.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "type_str": "figure", "text": "The generation of the sequence s_1 s_2 s_3 given the PER3 class.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "The grammar of PER, LOC and ORG candidates. SN: Chinese surname; GN1: first character of a Chinese given name; GN2: second character of a Chinese given name; FNC: character of a foreign name; CW: Chinese word; LK: location keyword; LABB: abbreviation of a location name; OK: organization keyword; OABB: abbreviation of an organization name.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "Figure 3.1. State transition in the lattice without the identification of ORG abbreviations.", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "Figure 3.2.
State transition in the lattice with the identification of ORG abbreviations.", "num": null }, "FIGREF6": { "uris": null, "type_str": "figure", "text": "Precision in different settings. Figure 5. Recall in different settings. 1. Results of NLPWin parsing. 2. Results of the baseline class-based model. 3. Performance of the segmentation-identification separate method. 4. Performance of integrating heuristic information and adopting the unified framework. 5. Performance of modeling for the abbreviations of personal names. 6. Performance of modeling for the abbreviations of location names. 7. Performance of modeling for the abbreviations of organization names", "num": null }, "TABREF0": { "content": "
Class | Explanation/Intuition | Examples
FN | foreign names in transliteration | 'Clinton'
PER1 | Chinese personal name consisting only of a surname | 总 'Premier Zhou'
PER2 | Chinese personal name consisting of a surname and a one-character given name | 'Li Peng'
PER3 | Chinese personal name consisting of a surname and a two-character given name | 来 'Zhou Enlai'
PABB | Abbreviation of a personal name | 来 'Enlai'
LOCW | Whole name of a location | 'Beijing City'
LABB | Abbreviation of a location name | 关 'Sino-Japan relation'
ORG | Organization name | 邮电 学 'Beijing University of Posts&Telecommunications'
PT | A personal title in context (-1~1) of PER | 总 'Premier Zhou'
PV | Speech-act verb in context (-2~2) of PER | 总 'Premier Zhou points out'
LK | Location keyword in a location name |
OK | Organization keyword in an organization name | 邮电 学
DT | Date and time expression | 200210
NU | Numerical expression | 亿, 5%
BOS | Beginning of a sentence |
EOS | End of a sentence |
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF1": { "content": "
Tag | Explanation | Tag in PER | Tag in LOC | Tag in ORG
B | Beginning of the NE | PB | LB | OB
E | End of the NE | PE | LE | OE
F | First character (or word) in the NE | PF | LF | OF
I | Medial character (or word) in the NE, neither initial nor final | PI | LI | OI
L | Last character (or word) in the NE | PL | LL | OL
S | Single character (or word) | PS | LS | OS
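As an illustration of the tag scheme above, the F/I/L/S tags can be assigned mechanically to the characters of an NE; a minimal Python sketch (the helper name and interface are hypothetical, not the paper's code):

```python
def char_tags(entity_len, prefix):
    """Assign the per-character tags of the table above to one NE.

    entity_len: number of characters (or words) in the NE
    prefix:     'P', 'L' or 'O' for PER / LOC / ORG (giving PF, LI, OS, ...)
    """
    if entity_len == 1:
        return [prefix + "S"]            # single character (or word)
    return ([prefix + "F"]               # first character
            + [prefix + "I"] * (entity_len - 2)  # medial characters
            + [prefix + "L"])            # last character
```

A three-character PER such as 'Zhou Enlai' would thus receive PF PI PL, and a one-character LOC abbreviation LS.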
3.2 Given a Chinese character sequence S_1^n = s_1...s_n, in which NEs are to be identified, the identification of PERs and LOCs is the problem of finding the optimal class sequence C_1^m = c_1...c_m (m <= n) that maximizes the conditional probability P(C_1^m | S_1^n). This idea can be expressed by Equation (1), which gives the basic form of the class-based LM:
C* = argmax P(C_1^m | S_1^n) = argmax P(C_1^m) P(S_1^n | C_1^m)   (1)
The class-based LM consists of two components: the contextual model P(C_1^m) and the entity model P(S_1^n | C_1^m). The contextual model estimates the generative probability of a class sequence. The probability P(C_1^m) can be approximated using trigram probabilities, as shown in Equation (2):
P(C_1^m) ≈ ∏_{i=1..m} P(c_i | c_{i-2}, c_{i-1})   (2)
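The decomposition just described, a contextual class trigram multiplied by per-class entity models, can be sketched as a toy scorer. All tables, numbers and names below are hypothetical stand-ins for illustration, not the paper's estimates:

```python
import math

# Hypothetical toy probability tables; a real system estimates these
# from the annotated training corpus and smooths unseen events.
TRIGRAM = {("<BOS>", "<BOS>", "PER"): 0.2, ("<BOS>", "PER", "PV"): 0.3}
ENTITY = {("PER", "zhou enlai"): 1e-4, ("PV", "points out"): 1e-2}
FLOOR = 1e-9  # crude stand-in for smoothing of unseen events

def log_score(classes, spans):
    """log P(C_1^m) + log P(S_1^n | C_1^m) for one candidate analysis.

    classes: class labels c_1..c_m
    spans:   the character string each class generates
    """
    lp = 0.0
    c1, c2 = "<BOS>", "<BOS>"
    for c in classes:  # contextual model: trigram over the class sequence
        lp += math.log(TRIGRAM.get((c1, c2, c), FLOOR))
        c1, c2 = c2, c
    for c, s in zip(classes, spans):  # entity models: class generates its span
        lp += math.log(ENTITY.get((c, s), FLOOR))
    return lp
```

Decoding then amounts to taking the candidate class sequence with the highest score, which is done efficiently with Viterbi search over the segmentation/class lattice rather than by enumerating candidates.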
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF5": { "content": "
LB CW LK LE
PB SN GN1 GN2 PE
LABB
GN1 GN2
OB CW OK OE
FNC PER FNC PT
OABB
", "html": null, "type_str": "table", "num": null, "text": "The NLPWin system is a natural language processing system developed by Microsoft Research.4 The PV and PT are not tagged in the training data parsed by NLPWin. They are then labeled using rule-based methods." }, "TABREF6": { "content": "
where λ ∈ [0,1] is the interpolation weight determined on the development data set. The probability P_static(s_i | s_{i-1}, s_{i-2}; PER) is estimated from the training data of PER, and P_unicache(s_i | PER) is the cache unigram model, as shown in Equation (8):
P_abbr(s_i | PER) ≅ λ × P_unicache(s_i | PER) + (1 - λ) × P_static(s_i | s_{i-1}, s_{i-2}; PER)   (8)
5 At present, the abbreviations of transliterated personal names are not modeled.
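The linear interpolation of Equation (8) is direct to sketch; the function and the toy distributions below are hypothetical illustrations, not the paper's implementation:

```python
def p_abbr(s, history, lam, p_unicache, p_static):
    """Equation (8): linearly interpolate the cache unigram model with the
    static trigram entity model; lam is the weight tuned on development data."""
    return lam * p_unicache(s) + (1.0 - lam) * p_static(s, history)

# Hypothetical toy distributions, for illustration only.
p_uni = lambda s: {"en": 0.5, "lai": 0.5}.get(s, 0.0)  # cache unigram
p_sta = lambda s, hist: 0.01                           # static trigram stand-in
p = p_abbr("en", ("<s>", "<s>"), lam=0.8, p_unicache=p_uni, p_static=p_sta)
# p = 0.8 * 0.5 + 0.2 * 0.01 = 0.402
```

A larger λ trusts the cache of already-recognized full names more; λ = 0 falls back entirely to the static entity model.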
", "html": null, "type_str": "table", "num": null, "text": "abbreviation candidates of personal names: s 1 and s 2 s 3. The corresponding generative probabilities of these two types of candidates given PER abbreviation are computed by linearly interpolating the cache unigram model (p unicache (s i )) and the static entity model" }, "TABREF8": { "content": "
Examples | Corresponding English | Parsing results | Correct annotations according to our guidelines
 | Secretary-General Jiang | <PER> 总书记</PER> | <PER> </PER> 总书记
 | Xiao Xu | <PER><PER> | <PER> <PER>
 | Sichuan Province | <LOC></LOC> | <LOC></LOC>
华 | Xinhua News Agency | <LOC> 华 </LOC> | <ORG> 华 </ORG>
联 国 | The United Nations | <LOC>联 国</LOC> | <ORG>联 国</ORG>
卫 | Ministry of Sanitation | 卫 |
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF9": { "content": "
Entity | Number of Word Tokens, Year 1997 | Year 1998
PER1 | 2,459 | 1,863
PER2 | 48,404 | 46,141
PER3 | 126,384 | 115,057
FN | 81,885 | 82,474
Locations (whole names) | 376,126 | 354,317
Abbreviations of Locations | 21,304 | 17,412
Organizations | 122,288 | 125,711
Personal Titles | 67,537 | 59,879
Speech-act Verbs | 87,602 | 83,930
Location Keywords | 49,767 | 53,469
Organization Keywords | 115,447 | 117,423
5.2.2 Test Data
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF10": { "content": "
ID | Domain | Number of NE Tokens: PER | LOC | ORG | Data Size (Byte)
1 | Army | 65 | 203 | 30 | 19k
2 | Computer | 62 | 160 | 134 | 59k
3 | Culture | 549 | 672 | 81 | 138k
4 | Economy | 154 | 824 | 354 | 108k
5 | Entertainment | 665 | 617 | 143 | 104k
6 | Literature | 458 | 715 | 131 | 96k
7 | Nation | 450 | 1195 | 251 | 101k
8 | People | 1134 | 913 | 400 | 116k
9 | Politics | 510 | 1147 | 214 | 122k
10 | Science | 148 | 206 | 81 | 60k
11 | Sports | 733 | 1194 | 623 | 114k
Total | | 4928 | 7846 | 2442 | 1037k
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF11": { "content": "
NE | P (%) | R (%)
PER | 61.05 | 75.26
LOC | 78.14 | 71.57
ORG | 68.29 | 31.50
Total | 70.07 | 66.08
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF12": { "content": "
NE | P (%) Exp.1 | P (%) Exp.3 | R (%) Exp.1 | R (%) Exp.3
PER | 66.52 | 81.24 | 77.82 | 83.66
LOC | 88.08 | 86.89 | 77.80 | 78.65
ORG | 37.12 | 75.90 | 45.58 | 47.58
All Three | 70.42 | 83.57 | 72.63 | 75.29
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF13": { "content": "
Results of Experiments 3, 4, 5 and 6.
NE | P (%) Exp.3 | Exp.4 | Exp.5 | Exp.6 | R (%) Exp.3 | Exp.4 | Exp.5 | Exp.6
PER | 81.24 | 79.64 | 79.77 | 79.78 | 83.66 | 89.31 | 89.31 | 89.29
LOC | 86.89 | 87.04 | 85.76 | 86.02 | 78.65 | 78.61 | 84.91 | 84.87
ORG | 75.90 | 75.97 | 75.95 | 76.79 | 47.58 | 49.50 | 47.71 | 59.75
All Three | 83.57 | 82.95 | 82.52 | 82.59 | 75.29 | 77.08 | 80.36 | 82.27
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF14": { "content": "
NE | NE Abbreviations P (%) | R (%) | Whole NEs P (%) | R (%)
PER | 61.72 | 78.20 | 81.45 | 90.18
LOC | 67.96 | 71.88 | 88.02 | 86.20
ORG | 78.03 | 65.05 | 76.46 | 58.46
All Three | 68.63 | 71.29 | 84.28 | 83.53
5.4.4 Summary of Experiments
Figures 4 and 5 give a brief summary of the experiments in different settings.
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF15": { "content": "
Figure 6. NE errors, by boundary and class tag (Boundary: Correct/Error; Class Tag: Correct/Error).
NE | Error 1 (%) | Error 2 (%)
PER | 87.71 | 12.29
LOC | 96.86 | 3.14
ORG | 97.73 | 2.27
All Three | 93.14 | 6.86
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF16": { "content": "
Cases | Identified results | Standard | Transliteration/Translation
a. Personal names that contain content words | 厉为 | 厉为 | Li Youwei
 | | | Gao Feng
b. Location names that have a nested personal name | | | Ho Chi Minh City
c. Japanese names | | | Tengjing Meizi
d. Aliases of personal names | 东东 | 东东 | Dongdong
 | 娇娇 | 娇娇 | Jiaojiao
e. Transliterated personal names and transliterated location names that cannot be distinguished | 贾 | 贾 | Ajax
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF17": { "content": "
Cases | Identified results | Standard | Transliteration/Translation
a. Part of a sequence in LOC and the right context that can be combined into a word | | | Suburb of Shenzhen City
 | 边 | 边 | Buji River side
 | 县 | 县 | Hepu county
b. Some abbreviations, which are common content words | () | | Japan
 | ( 国) ( ) | | China, Hongkong
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF18": { "content": "
Cases | Identified results | Standard | Transliteration/Translation
a. Organization names that contain other organization names | 联 国 维队 | 联 国 维队 | The UN Peacekeeping Missions
 | 联 国 难 | 联 国 难 | The UN Refugee Office
 | 华 门 | 华 门 | Branch office of the Xinhua News Agency in Macao
b. ORGs that contain numbers, dates or English characters | | | August 1st Team
 | 团 | | 691st Regiment
 | 纪 | | Twentieth Century Fox
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF19": { "content": "
NE | Our System P (%) | R (%) | NTU Results P (%) | R (%) | Kent Ridge Digital Labs (KRDL) P (%) | R (%)
PER | 77.51 | 93.10 | 74 | 91 | 66 | 92
LOC | 86.52 | 87.20 | 69 | 78 | 89 | 91
ORG | 88.75 | 77.25 | 85 | 78 | 89 | 88
", "html": null, "type_str": "table", "num": null, "text": "" } } } }