{ "paper_id": "O12-4004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:02:48.599566Z" }, "title": "Strategies of Processing Japanese Names and Character Variants in Traditional Chinese Text", "authors": [ { "first": "Chuan-Jie", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "addrLine": "No 2, Pei-Ning Road", "postCode": "20224", "settlement": "Keelung", "country": "Taiwan" } }, "email": "cjlin@ntou.edu.tw" }, { "first": "Jia-Cheng", "middle": [], "last": "Zhan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "addrLine": "No 2, Pei-Ning Road", "postCode": "20224", "settlement": "Keelung", "country": "Taiwan" } }, "email": "" }, { "first": "Yen-Heng", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "addrLine": "No 2, Pei-Ning Road", "postCode": "20224", "settlement": "Keelung", "country": "Taiwan" } }, "email": "" }, { "first": "Wei", "middle": [], "last": "Pao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Ocean University", "location": { "addrLine": "No 2, Pei-Ning Road", "postCode": "20224", "settlement": "Keelung", "country": "Taiwan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes an approach to identify word candidates that are not Traditional Chinese, including Japanese names (written in Japanese Kanji or Traditional Chinese characters) and word variants, when doing word segmentation on Traditional Chinese text. When handling personal names, a probability model concerning formats of names is introduced. We also propose a method to map Japanese Kanji into the corresponding Traditional Chinese characters. The same method can also be used to detect words written in character variants. After integrating generation rules for various types of special words, as well as their probability models, the F-measure of our word segmentation system rises from 94.16% to 96.06%. Another experiment shows that 83.18% of the 862 Japanese names in a set of 109 human-annotated documents can be successfully detected.", "pdf_parse": { "paper_id": "O12-4004", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes an approach to identify word candidates that are not Traditional Chinese, including Japanese names (written in Japanese Kanji or Traditional Chinese characters) and word variants, when doing word segmentation on Traditional Chinese text. When handling personal names, a probability model concerning formats of names is introduced. We also propose a method to map Japanese Kanji into the corresponding Traditional Chinese characters. The same method can also be used to detect words written in character variants. After integrating generation rules for various types of special words, as well as their probability models, the F-measure of our word segmentation system rises from 94.16% to 96.06%. Another experiment shows that 83.18% of the 862 Japanese names in a set of 109 human-annotated documents can be successfully detected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word segmentation is an indispensable technique in Chinese NLP. Nevertheless, the processing of Japanese names and Chinese word variants has been studied rarely. 
When Traditional Chinese text was mostly encoded in BIG5, writers often transcribed a Japanese person's name into its equivalent Traditional Chinese characters, such as the name \"\u6edd\u6ca2\u79c0\u660e\" (Hideaki Takizawa) in Japanese becoming \"\u7027\u6fa4\u79c0\u660e\" in Traditional Chinese. After Unicode was widely adopted, names written in their original Japanese Kanji also began to appear in Traditional Chinese text. Another issue is that different regions may write the same character in different variant forms. In addition to word segmentation ambiguity, the out-of-vocabulary problem is another important issue. Unknown words include rare words (e.g. \"\u8e89\u552e,\" for sale); technical terms (e.g. \"\u4e09\u805a\u6c30\u80fa,\" melamine, a chemical compound); newly invented terms (Chien, 1997) (e.g. \"\u65b0\uf9ca\u611f,\" swine flu); and named entities, such as personal and location names. NE recognition is an important related technique (Sun et al., 2003). In recent times, machine learning approaches have been the focus of papers on Chinese segmentation, such as using SVM (Lu, 2007) or CRF (Zhao et al., 2006; Shi & Wang, 2007).", "cite_spans": [ { "start": 854, "end": 866, "text": "(Chien, 1997", "ref_id": "BIBREF1" }, { "start": 1004, "end": 1022, "text": "(Sun et al., 2003)", "ref_id": "BIBREF7" }, { "start": 1143, "end": 1153, "text": "(Lu, 2007)", "ref_id": "BIBREF4" }, { "start": 1161, "end": 1180, "text": "(Zhao et al., 2006;", "ref_id": "BIBREF9" }, { "start": 1181, "end": 1198, "text": "Shi & Wang, 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Fewer studies have focused on handling words that are not Traditional Chinese in Traditional Chinese text. The most relevant work discusses the impact of the different Chinese vocabularies used in different regions on word segmentation systems. These experiments were designed either to train a system on a Traditional Chinese corpus but test it on a Simplified Chinese test set, or to increase the robustness of a system with a lexicon expanded by adding new terms from different regions (Lo, 2008).", "cite_spans": [ { "start": 502, "end": 512, "text": "(Lo, 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The main problem in this paper is defined as follows. When a word that is not Traditional Chinese appears in a Traditional Chinese document, whether it is a Japanese name such as \"\u6edd\u6ca2\u79c0\u660e\" (written in Japanese Kanji) or \"\u7027\u6fa4\u79c0\u660e\" (written in its equivalent Traditional Chinese characters), a word variant (e.g. \"\uf9e8\u9762\" vs. \"\u88cf\u9762\"), or a word written in Simplified Chinese, it should be detected and become a word segmentation candidate. This paper is organized as follows. Section 2 introduces the basic architecture of our word segmentation system. Section 3 explains the Chinese and Japanese name processing modules. Section 4 describes the character-variant clusters and their corresponding Traditional Chinese characters. Section 5 presents the experimental results and discussion, and Section 6 concludes this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "This paper focuses on approaches to handling words that are not Traditional Chinese during word segmentation. We first constructed a basic bigram-model word segmentation system. We did not build a more complicated system because its only purpose is to observe the effect that different approaches to handling words not written in Traditional Chinese have on word segmentation performance. Word candidates are identified by searching the lexicon or by applying detection rules for special-type words, such as temporal or numerical expressions. Note that identical word candidates may be proposed by different rules or by the lexicon. Moreover, if no candidate of any length can be found at a particular position inside the input sentence, the system automatically adds a one-character candidate at that position. Afterward, the probabilities of all of the possible segmentations are calculated according to a bigram model. The most probable segmentation is proposed as the result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Segmentation Strategy", "sec_num": "2." }, { "text": "As there are many special-type words, it is impossible to collect them all in a lexicon. Hence, we manually designed many detection rules to recognize such words in an input sentence. The special types handled in our system include the following: address, date, time, monetary expression, percentage, fraction, Internet address (IP, URL, e-mail, etc.), number, string written in a foreign language, and Chinese or Japanese personal name. Numerical digits in the detection rules can be full-width digits or Chinese numbers (\u4e00,\u4e8c\u2026\u58f9\u8cb3\u2026). Foreign-language characters are detected according to the Unicode table; thus, any character set, such as Korean or Arabic characters, can easily be added into our system. Consecutive characters written in the same foreign language are treated as one word, as most languages use the space symbol as the word-segmentation mark.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special-Type Word Candidate Generation Rules", "sec_num": "2.1" }, { "text": "Since the focus of this paper is not on the correctness of these special rules, only the personal name detection rules will be explained, in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special-Type Word Candidate Generation Rules", "sec_num": "2.1" }, { "text": "After enumerating all possible segmentations, the next step is to calculate their probabilities P(S). Many probabilistic models have been proposed in word segmentation research. Our system is built on a Markov bigram probabilistic model, whose definition is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(S = w_1 w_2 \\ldots w_N) = P(w_1) \\times \\prod_{i=2}^{N} P(w_i \\mid w_{i-1})", "eq_num": "(1)" } ], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "where P(w_i) is the unigram probability of the word w_i and P(w_i | w_{i-1}) is the probability that w_i appears after w_{i-1}. In order to avoid the underflow problem, the equation is often calculated in its logarithmic form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "\\log P(S = w_1 w_2 \\ldots w_N) = \\log P(w_1) + \\sum_{i=2}^{N} \\log P(w_i \\mid w_{i-1}) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "Data sparseness is an obvious problem, i.e. most word bigrams have no probability. Our solution is a unigram back-off strategy. That is, when a bigram never occurs in the training corpus, its bigram probability P(w_i | w_{i-1}) is measured by \u03b1P(w_i) instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "When determining the probability of a bigram containing special-type words, the probability is calculated by Eq. 3. Suppose that w_i belongs to a special type T; the equation is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "P(w_i \\mid w_{i-1}) \\times P(w_{i+1} \\mid w_i) = P(T \\mid w_{i-1}) \\times P_G(w_i \\mid T) \\times P(w_{i+1} \\mid T) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "where P(T | w_k) and P(w_k | T) are the special-type bigram probabilities involving the type T and a word w_k, and P_G(w_i | T) is the generation probability of w_i being in the type T. The generation probabilities are set to 1 for all special types other than the personal names, whose definitions are explained in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "As the boundaries of some special types, including address, monetary expression, percentage, fraction, Internet address, number, and foreign-language string, are deterministic and unambiguous, their special-type bigram probabilities are all set to 1, which means that we accept the segmentation directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "On the other hand, characters for Chinese numbers often appear as a part of a word, such as \"\u4e00\ufa00\" (\"\u4e00,\" one; \"\u4e00\ufa00,\" all) and \"\u842c\u4e00\" (both characters are numbers but together mean \"if it happens\"). Therefore, the number-type bigram probability is trained from a corpus. Some temporal expressions are unambiguous, such as the date expression \"\u4e2d\u83ef\u6c11\u570b\u4e5d\u5341\u516b\uf98e\uf9d1\u6708\u4e8c\u5341\u4e00\u65e5\" (\"June 21 of the 98th year of the R.O.C.\"). Their special-type bigram probabilities are set to 1. 
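As a concrete illustration of Eqs. 2 and 3, the following minimal sketch scores one segmentation in log space with the unigram back-off described above. It is our simplification, not the original implementation; the value of ALPHA and the probability tables are assumed inputs.

```python
import math

ALPHA = 0.1  # back-off weight; an assumed value, tuned on held-out data in practice

def backoff_logprob(bigram_lp, unigram_lp, prev, cur):
    # log P(cur | prev); an unseen bigram backs off to log(ALPHA * P(cur))
    if (prev, cur) in bigram_lp:
        return bigram_lp[(prev, cur)]
    return math.log(ALPHA) + unigram_lp.get(cur, float('-inf'))

def segmentation_logprob(words, bigram_lp, unigram_lp):
    # Eq. 2: log P(S) = log P(w_1) + sum of log P(w_i | w_{i-1}) for i = 2..N.
    # Special-type words would be replaced by their type labels here, with the
    # generation probability of Eq. 3 added in log space.
    score = unigram_lp.get(words[0], float('-inf'))
    for prev, cur in zip(words, words[1:]):
        score += backoff_logprob(bigram_lp, unigram_lp, prev, cur)
    return score
```

Ambiguous special-type words are handled differently, as described next. 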
For ambiguous temporal expressions, such as \"\u4e09\u5341\uf98e\" (meaning \"the 30th year\" or \"thirty years\"), their special-type bigram probabilities are obtained by training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "Before training the bigram model, words belonging to special types are first identified by the detection rules and replaced by labels representing their types so that the special-type bigram probabilities can be measured at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "Our special-type bigram probability model is very similar to that of Gao et al. (2003) . Nevertheless, they treat all dictionary words as one class and all types of special words as a second class, while we treat different types as different classes.", "cite_spans": [ { "start": 61, "end": 78, "text": "Gao et al. (2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Bigram Probabilistic Model", "sec_num": "2.2" }, { "text": "When an input sentence is too long or too many possible segmentations can be found (sometimes hundreds of thousands), the computation time becomes intractable. In order to reduce the computation load, we use a beam search algorithm to prune low-probability segmentations. The main idea of the algorithm is described as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computation Reduction", "sec_num": "2.3" }, { "text": "Let N be the number of characters in an input sentence. Declare N priority queues (denoted as record[i], where i = 1~N) to record the top k segmentations with the highest probability scores covering the first i characters. For each word candidate w beginning at the (i+1)th character whose length is b, append the word w to every segmentation stored in record[i], compute the probability of the new segmentation, and try to insert it into the queue record[i+b]. If the new segmentation has a higher probability than some segmentation stored in the queue record[i+b], the segmentation with the lowest probability in record[i+b] is discarded in order to insert the new segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computation Reduction", "sec_num": "2.3" }, { "text": "At the beginning, all priority queues are empty. Start with the first character in the sentence. Recursively perform the steps described in the previous paragraph until all of the word candidates starting at the Nth character have been considered. In the end, the top-1 segmentation stored in record[N] is proposed as the result. The queue size k is set to 20 in our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computation Reduction", "sec_num": "2.3" }, { "text": "In this section, we focus on how to find Japanese names written in Japanese Kanji that appear in a Traditional Chinese article. 
The method of identifying Japanese names written in corresponding Traditional Chinese characters is discussed in Section 4. As our approach to recognize Japanese personal names is similar to the one to find Chinese names, our Chinese name identification approach is introduced first.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese and Japanese Name Processing", "sec_num": "3." }, { "text": "A Chinese personal name consists of a surname part and a first name part. A Chinese surname can be one or two syllables (one or two characters) long. In some cases, a person may have two surnames (usually both with one syllable) in his or her name for various reasons. The first name part in a Chinese name is also one or two syllables long. All name formats possibly seen in an article are listed in Table 1 , where \"SN\" denotes \"surname,\" \"FN\" as \"first name,\" and \"char\" is \"character\".", "cite_spans": [], "ref_spans": [ { "start": 401, "end": 408, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "All strings matching these formats are treated as Chinese name candidates, except the format \"1-char FN,\" in order to prevent proposing every single character as a personal name candidate. The combination of two surnames is also restricted to two 1-syllable surnames, because one rarely sees a 2-syllable surname combined with another surname. We need to build probabilistic models for each character being in every part of a name, as well as a probabilistic model for the personal name formats. To recognize a Chinese name, first we have to prepare a complete list of Chinese surnames. We collected surnames from the Wikipedia entries \"\u4e2d\u570b\u59d3\u6c0f\uf99c\u8868\" 1 (List of Chinese Surnames) and \"\u8907\u59d3\" 2 (2-Syllable Surnames), the websites of the Department of Civil Affairs at the Ministry of Interior 3 , \u4e2d\u83ef\u767e\u5bb6\u59d3 4 (GreatChinese), and \u5343\u5bb6\u59d3 5 (Thousand Surnames). 2,471 surnames were collected. As for the first name part, we simply treat all of the Chinese characters as possible first name characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "The generation probability model of a word being a Chinese name is defined as Eq. 4,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "where \u03c3 is the gender model (male or female), and \u03c0 is a possible format matching the word w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "The name format is represented as \u03c0 = 'xxxx,' where 's' denotes a 1-syllable surname, 'dd' a 2-syllable surname, and 'n' a character in a first name. 
For example, the format \"two SNs+2-char FN\" is represented as \u03c0 = 'ssnn' and the format \"2-char SN+1-char FN\" is represented as \u03c0 = 'ddn'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_G(w \\mid S_{CHname}) = \\max_{\\sigma, \\pi} P_\\sigma(w \\mid \\pi) \\times P_G(\\pi \\mid S_{CHname})", "eq_num": "(4)" } ], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "In Eq. 4, the Chinese name generation probability P_\u03c3(w|\u03c0) is the probability of a word w being a Chinese name whose format is \u03c0 and gender is \u03c3. The Chinese name format probability P_G(\u03c0|S_CHname) is the probability of the special type S_CHname (Chinese personal names) appearing in an article with a format \u03c0. The methods of building these probabilistic models are introduced in the following paragraphs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "When computing the Chinese name generation probability P_\u03c3(w|\u03c0), we borrowed the idea from Chen et al. (1998) , but we assume that the choice of first names is independent of the surname, and that the choice of the two characters in the first name part is also independent, in order to reduce the complexity. We also assume that the surname is unrelated to the person's gender. In Table 2, LN_CH is the set of Chinese surnames and FN_CH is the set of characters used in a Chinese first name. A more sophisticated model may be applied but is outside the scope of this paper.", "cite_spans": [ { "start": 92, "end": 110, "text": "Chen et al. (1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "Format \u03c0 | Name Generation Probability P_\u03c3(w|\u03c0) | Format Probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Definitions of the Chinese name probabilities for every name format", "sec_num": null }, { "text": "s | P_G(c_1|LN_CH) | P_G(\u03c0='s'|S_CHname); dd | P_G(c_1c_2|LN_CH) | P_G(\u03c0='dd'|S_CHname); sn | P_G(c_1|LN_CH)\u00d7P_\u03c3(c_2|FN_CH) | P_G(\u03c0='sn'|S_CHname); nn | P_\u03c3(c_1|FN_CH)\u00d7P_\u03c3(c_2|FN_CH) | P_G(\u03c0='nn'|S_CHname); ddn | P_G(c_1c_2|LN_CH)\u00d7P_\u03c3(c_3|FN_CH) | P_G(\u03c0='ddn'|S_CHname); snn | P_G(c_1|LN_CH)\u00d7P_\u03c3(c_2|FN_CH)\u00d7P_\u03c3(c_3|FN_CH) | P_G(\u03c0='snn'|S_CHname); ssn | P_G(c_1|LN_CH)\u00d7P_G(c_2|LN_CH)\u00d7P_\u03c3(c_3|FN_CH) | P_G(\u03c0='ssn'|S_CHname); ddnn | P_G(c_1c_2|LN_CH)\u00d7P_\u03c3(c_3|FN_CH)\u00d7P_\u03c3(c_4|FN_CH) | P_G(\u03c0='ddnn'|S_CHname); ssnn | P_G(c_1|LN_CH)\u00d7P_G(c_2|LN_CH)\u00d7P_\u03c3(c_3|FN_CH)\u00d7P_\u03c3(c_4|FN_CH) | P_G(\u03c0='ssnn'|S_CHname)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Definitions of the Chinese name probabilities for every name format", "sec_num": null }, { "text": "The generation probability models for surnames and first name characters, P_G(c_i|LN_CH), P_G(c_ic_{i+1}|LN_CH), and P_\u03c3(c_j|FN_CH), are trained from a large corpus by maximum likelihood: 1-char SN: P_G(c_i|LN_CH) \uff1d count(c_i) / count(names); 2-char SN: P_G(c_ic_{i+1}|LN_CH) \uff1d count(c_ic_{i+1}) / count(names); FN char: P_\u03c3(c_j|FN_CH) \uff1d count(c_j) / count(FN chars) of gender \u03c3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "We adopted a list of one million personal names in Taiwan to build the probabilistic models. The list contains 476,269 male names and 503,679 female names. Only 953 surnames and slightly more than 4,000 first name characters are seen in the name list. For unseen surnames and first name characters, we assign a small probability (10^{-1000}, tuned by experiments) to avoid the zero probability problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "The next step is to build the Chinese name format probability P_G(\u03c0|S_CHname). Since it concerns the probability of a name format appearing in an article, the distribution is quite different from the one observed in the list of one million personal names. A person is often mentioned in an article by his or her title, e.g. \"Prof. \uf9f4\" (\"Prof. Lin\") or \"Mr. \u8af8\u845b\" (\"Mr. Zhu-Ge\"). When referring to a person in a novel or a letter, it is quite natural to give his or her first name instead of his or her full name. Such cases cannot be captured inside the one million personal names list. Therefore, we need another corpus to train this model. Personal names in the Academia Sinica Balanced Corpus (Sinica Corpus hereafter) are marked as proper nouns (POS-tagged as Nb). We extracted all of the proper nouns in the Sinica Corpus that matched any name format and assumed them to be personal names. These names occur in real documents; thus, they can satisfy our need. The precedence of format matching is defined as follows. Every personal name can only be matched to one format.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "1-char word\uff1as > n > not-Chinese-personal-name; 2-char word\uff1add > sn > nn > not-Chinese-personal-name; 3-char word\uff1addn > snn > ssn > not-Chinese-personal-name; 4-char word\uff1addnn > ssnn > not-Chinese-personal-name; 5-char word\uff1anot-Chinese-personal-name. Nevertheless, because some common characters are uncommon surnames, it is possible to incorrectly identify a proper noun of some other type as a personal name, such as \"\u4e2d\u8208\u865f\" (\"Zhong Xing Hao,\" a bus company name), where \"\u4e2d\" (\"Zhong\") is also a surname. In order to increase the precision without sacrificing the recall, we used only frequent surnames and first name characters to do the matching. 
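Before specifying those frequent sets, a toy sketch shows how Eq. 4 and Table 2 combine for a 3-character candidate. This is an illustration only; the table names p_sn, p_sn2, p_fn, and p_format are ours, not the original implementation.

```python
def chinese_name_prob(w, p_sn, p_sn2, p_fn, p_format):
    # w: a 3-character candidate word
    # p_sn[c] = P_G(c|LN_CH); p_sn2[cc] = P_G(c1c2|LN_CH) for 2-char surnames
    # p_fn[gender][c] = P_sigma(c|FN_CH); p_format[pi] = P_G(pi|S_CHname)
    scores = [0.0]
    for g in ('M', 'F'):
        # format 'snn': 1-char surname + 2-char first name
        scores.append(p_sn.get(w[0], 0.0) * p_fn[g].get(w[1], 0.0)
                      * p_fn[g].get(w[2], 0.0) * p_format.get('snn', 0.0))
        # format 'ssn': two 1-char surnames + 1-char first name
        scores.append(p_sn.get(w[0], 0.0) * p_sn.get(w[1], 0.0)
                      * p_fn[g].get(w[2], 0.0) * p_format.get('ssn', 0.0))
        # format 'ddn': one 2-char surname + 1-char first name
        scores.append(p_sn2.get(w[:2], 0.0) * p_fn[g].get(w[2], 0.0)
                      * p_format.get('ddn', 0.0))
    return max(scores)
```

On a candidate such as \u5f35\u5fb7\u57f9, the maximum is expected to come from the male 'snn' reading, as the worked example below shows. 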
The sets of frequent characters are the ones that dominate 90% of the probabilities in the name generation model, including 64 surnames (\u9673,\uf9f4\u2026\u7a0b), 467 male first name characters (\u6587,\u660e\u2026\u701b), and 293 female first name characters (\u7f8e,\u6dd1\u2026\u5409), together with all of the 2-syllable surnames.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "There are two more formats seen in articles: SN+\"\u59d3\" or SN+\"\u6c0f\", which call a person or a family, respectively, by the surname only. We denote them as \u03c0 = 'p'. After implementing the matching procedure described above, 39,612 of the 92,314 proper nouns in the Sinica Corpus were extracted as personal names. The Chinese name format probabilities are listed in Table 3. Although there may still be false-alarm personal names in the set, we expect that the scale of the corpus is large enough that it can still provide relatively accurate information. The identified personal names in the corpus can also be used to build the bigram models related to the special type S_CHname, Chinese personal name. An example is given here to illustrate how the probability of a personal name is determined.", "cite_spans": [], "ref_spans": [ { "start": 358, "end": 365, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "The word \"\u5f35\u5fb7\u57f9\" (\"Michael Te Pei Chang\") matches two name formats, \u03c0 = {'snn', 'ssn'}, since both \"\u5f35\" (\"Chang\") and \"\u5fb7\" (\"Te\") are possible surnames. Gender options are male and female, i.e. \u03c3 = {M, F}. The most probable reading is a male name with the format 'snn'. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Personal Name Identification", "sec_num": "3.1" }, { "text": "When a Japanese name occurs in an article written in Chinese, there are two ways to write the name. In earlier days, when Traditional Chinese was usually encoded in BIG5, a Japanese name was normally written in its corresponding Traditional Chinese characters; for example, the name \"\u6edd\u6ca2\u79c0\u660e\" (Hideaki Takizawa, a Japanese performer) would be written as \"\u7027\u6fa4\u79c0\u660e\" in Traditional Chinese. Nowadays, many documents are encoded in Unicode, so Japanese Kanji can be used directly in a Traditional Chinese article. Our word segmentation approach aims to identify both cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "The format of a Japanese personal name is SN+FN, just like a Chinese name. Nevertheless, the length of a Japanese surname varies from one to three Kanji characters, as does the length of the first name part. Sometimes, a name is directly written in Katakana or Hiragana with various lengths. The number of Kanji or Kana characters in a Japanese name is strongly correlated with the number of syllables. Due to the lack of related knowledge, we only deal with names written entirely in Kanji and leave names including Kana as future work, although Kana can be detected easily by Unicode ranges. 
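For reference, a minimal sketch of such Unicode-range detection (the ranges are standard Unicode blocks; the function name and the block selection are our illustration):

```python
def script_of(ch):
    # Classify a character by standard Unicode blocks.
    cp = ord(ch)
    if 0x3040 <= cp <= 0x309F:
        return 'hiragana'
    if 0x30A0 <= cp <= 0x30FF:
        return 'katakana'
    if 0x4E00 <= cp <= 0x9FFF or 0x3400 <= cp <= 0x4DBF:
        # CJK Unified Ideographs (plus Extension A), shared by Kanji and Chinese
        return 'han'
    if 0xAC00 <= cp <= 0xD7A3:
        return 'hangul'
    return 'other'
```
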
As the length of Japanese names varies considerably, we adopt only three name formats, SN-only, FN-only, and SN+FN, regardless of the number of characters inside the first name part, as listed in Table 4. We also know that there are no double surnames in Japan.", "cite_spans": [], "ref_spans": [ { "start": 806, "end": 813, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "From the experience of Chinese name processing, we know that a list of Japanese surnames and a large collection of Japanese personal names are needed in order to build the name generation probability models. Also, we have to find a corpus of Chinese articles containing Japanese names in order to build the format probability model as well as the special-type bigram probability. The probability of a Japanese personal name is defined as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_G(w \\mid S_{JPname}) = \\max_{\\pi} P_G(w \\mid \\pi) \\times P_G(\\pi \\mid S_{JPname})", "eq_num": "(5)" } ], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "The notations in Eq. 5 are defined the same as in Eq. 4. One difference is that, because we do not have a large training corpus for different genders, the factor of gender in the name generation probability is omitted. Table 5 lists the definitions of each probability, where m and n are integers between 1 and 3, 'S' denotes the surname part, and 'F' denotes the first name part. Surnames and first names are also assumed to be independent, as are the characters inside a first name part.", "cite_spans": [], "ref_spans": [ { "start": 222, "end": 229, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "Format | Name Generation Probability P(w|\u03c0) | Format Probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 5. Definitions of the Japanese name probabilities for every format", "sec_num": null }, { "text": "SN | P_G(c_1\u2026c_m|LN_JP) | P_G(\u03c0='S'|S_JPname); FN | P_G(c_1|FN_JP)\u00d7\u2026\u00d7P_G(c_n|FN_JP) | P_G(\u03c0='F'|S_JPname); SN+FN | P_G(c_1\u2026c_m|LN_JP)\u00d7P_G(c_{m+1}|FN_JP)\u00d7\u2026\u00d7P_G(c_{m+n}|FN_JP) | P_G(\u03c0='SF'|S_JPname)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 5. Definitions of the Japanese name probabilities for every format", "sec_num": null }, { "text": "Japanese surnames were collected from a website called \"\u65e5\u672c\u306e\u82d7\u5b57\u4e03\u5343\u5091\" 6 (7,000 Surnames in Japan). This website provides 8,603 Japanese surnames along with their populations, where the data came from the 117 million customers of NTT, a Japanese telecom company. The population data can be used to measure the distributions of the surnames. Nevertheless, according to the Wikipedia entry \"\u65e5\u6587\u59d3\u540d,\" 7 there are more than 140 thousand Japanese surnames, far more than we have collected. No complete list is available so far. Moreover, we still need another data set to train the probabilities of first name characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "All of the Japanese Wikipedia entries that are biographies of persons were extracted for learning Japanese personal name distributions. In a Wikipedia page, the title of the entry is mentioned again in the text and marked in bold type. The surname part is often separated from the first name part by a space, as in the example of the entry \"\u9ad8\u6a4b\uf9cd\u7f8e\u5b50\" (\"Rumiko Takahashi\"), shown in Figure 1. By detecting such strings, we can gather many Japanese names in a short time.", "cite_spans": [], "ref_spans": [ { "start": 393, "end": 401, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "Nevertheless, Chinese celebrities may also become entries in the Japanese Wikipedia, such as \"\u738b\u5efa\u6c11\" (\"Chien-Ming Wang\") or \"\u66fe\u570b\u85e9\" (\"Zeng Guofan\"). We filtered out the names with a known Chinese surname and a first name part shorter than three characters. After processing the entire Japanese Wikipedia (dump of Jan 24, 2009) with the methods described above, 65,778 different Japanese names were extracted, including 12,907 surnames and 2,320 first name Kanji. Table 6 lists the frequencies of these first name Kanji, where the name generation probabilities P_G(c_j|FN_JP) are listed in the third column and the accumulated probabilities are in the fourth column. Many surnames collected from the Japanese Wikipedia did not appear in the surname list of \"\u65e5\u672c\u306e\u82d7\u5b57\u4e03\u5343\u5091\". The two lists were merged, resulting in a list of 15,702 surnames.", "cite_spans": [], "ref_spans": [ { "start": 436, "end": 443, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "The population data provided by \"\u65e5\u672c\u306e\u82d7\u5b57\u4e03\u5343\u5091\" or the frequencies in Wikipedia were used to estimate the generation probabilities of the surnames, as listed in Table 7. Note that surnames from \"\u4f50\u85e4\" to \"\u9ad8\u4e95\uf97c\" come from \"\u65e5\u672c\u306e\u82d7\u5b57\u4e03\u5343\u5091,\" and the surnames after \"\u6589\u85e4\" were collected from Wikipedia. The Japanese name format probability P_G(\u03c0|S_JPname) was also built by detecting Japanese names in the Sinica Corpus, but only on those proper nouns that were not determined to be Chinese names. Moreover, since the Japanese names in the Sinica Corpus are encoded in Traditional Chinese characters, the matching procedure also includes the corresponding Kanji mapping, which will be explained in Section 4.2.", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 163, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "When extracting Japanese names in the Sinica Corpus, only the 437 first name Kanji (\u5b50, \u4e00\u2026\u745e), which cover 90% of the probabilities, are used, along with the whole Japanese surname set. The preference of the formats is SN+FN > SN > FN. Each name matched one format at most. After doing so, 4,849 of the 92,314 proper nouns in the Sinica Corpus were extracted as Japanese names. 
They were used to build the format probability model (as listed in Table 8) as well as the special-type bigram probability for the Japanese name type S_JPname. In our experience, however, the format FN-only often suggests too many incorrect candidates and harms the performance of word segmentation. In the end, we elected not to use it. An example is given here to illustrate how the probability of a personal name is determined. The name \"\u6edd\u6ca2\u5149\" matches the Japanese name format in two ways: \"\u6edd\u6ca2\" (\"Takizawa\") as the surname and \"\u5149\" (\"Hikaru\") as the first name, or \"\u6edd\" (\"Taki\") as the surname and \"\u6ca2\u5149\" (\"Sawahikari\" 8 ) as the first name. The highest probability suggests \"\u6edd\u6ca2\" as the surname and \"\u5149\" as the first name.", "cite_spans": [], "ref_spans": [ { "start": 443, "end": 450, "text": "Table 8", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "Name: \u6edd\u6ca2\u5149 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese Personal Name Identification", "sec_num": "3.2" }, { "text": "This section discusses three cases where character variants may be used: (1) a Japanese name written in its corresponding Chinese characters (e.g. \"\u6edd\u6ca2\u79c0\u660e\" vs. \"\u7027\u6fa4\u79c0\u660e,\" Hideaki Takizawa); (2) equivalent words in variant forms (e.g. \"\uf9e8\u9762\" vs. \"\u88cf\u9762,\" inside); and (3) Simplified Chinese terms (e.g. \"\u9ad4\u80b2\u9928\" vs. \"\u4f53\u80b2\u9986\", the gym) appearing in a Traditional Chinese article. Although the last two cases are not often seen, especially the third case (which could not happen until Unicode was adopted), we still propose approaches to handle all of these cases at the same time, with a view to building a multilingual environment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Variant Handling", "sec_num": "4." }, { "text": "A mapping table between the character variants is required for handling the three cases introduced in the previous paragraph. For Japanese names, we need a list of Japanese Kanji and their corresponding Chinese characters. For word variants, a list of the equivalent Chinese character sets is necessary. The mapping between Simplified Chinese terms and the corresponding Traditional Chinese ones requires a mapping between the two character sets, which is more easily acquired because many kinds of software provide such a mapping function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping of Character Variants", "sec_num": "4.1" }, { "text": "We do not know of any well-known Japanese-Chinese Kanji mapping tables. To construct one, we adopted the character variant list 9 developed by Prof. Koichi Yasuoka and Motoko Yasuoka in the Institute for Research in Humanities, Kyoto University. There are 8,196 character variant pairs collected in the list. Following the equivalence relationship, we grouped the characters in the list into many character-variant clusters. 
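Grouping the 8,196 variant pairs into clusters amounts to computing connected components over the pair list; the following is a minimal union-find sketch of this step (our illustration, assuming the list has been parsed into character pairs):

```python
def build_clusters(pairs):
    # pairs: iterable of (char_a, char_b) variant pairs
    # returns: char -> frozenset of all characters in its cluster
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # union the two clusters

    members = {}
    for ch in list(parent):
        members.setdefault(find(ch), set()).add(ch)
    return {ch: frozenset(s) for s in members.values() for ch in s}
```
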
Some examples of character-variant clusters are given here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping of Character Variants", "sec_num": "4.1" }, { "text": "\u4e30 \u8c4a \u8c50 \u973b \u974a / \u79c7 \u84fa \u8553 \u85dd / \u4e79 \u4e7e \u4e81 \u5e72 \u6f27", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping of Character Variants", "sec_num": "4.1" }, { "text": "Note that these variants are equivalent only in some cases. Take the first cluster illustrated above as an example. The character \"\u8c4a\" is a Japanese Kanji and \"\u4e30\" is a Simplified Chinese character, and they both correspond to the Traditional Chinese character \"\u8c50\". Nevertheless, \"\u8c4a\" (ritual vessel) and \"\u4e30\" (elegance) are also legal Traditional Chinese characters that have meanings different from that of \"\u8c50\" (prosperous).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping of Character Variants", "sec_num": "4.1" }, { "text": "In each character-variant cluster, one Traditional Chinese character (if any) is chosen to be the corresponding character. If there is more than one Traditional Chinese character in a cluster, the most frequent one is chosen. The frequencies of characters are provided by the Table of Frequencies of Characters in Frequent Words 10 (\u5e38\u7528\u8a9e\u8a5e\u8abf\u67e5\u5831\u544a\u66f8\u4e4b\u5b57\u983b\u7e3d\u8868) published by the Taiwan Ministry of Education in 1998. Again, considering the first cluster in the examples above, the three characters \"\u4e30,\" \"\u8c4a,\" and \"\u8c50\" are all Traditional Chinese characters. \"\u8c50\" is the most frequent one; hence, it is chosen as the corresponding character of this cluster. By doing so, not only do the Japanese Kanji \"\u8c4a\" and the Simplified Chinese character \"\u4e30\" have a corresponding Traditional Chinese character, but the infrequent variants \"\u973b\" and \"\u974a\" also have a frequent corresponding character.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping of Character Variants", "sec_num": "4.1" }, { "text": "There are several issues in variant mapping. First, the Traditional Chinese character set is larger than the BIG5 character set. Relatively infrequent Traditional Chinese characters, such as \"\u974a,\" are not seen in the BIG5 set. Since we are looking for the most frequent Traditional Chinese character, this does not become a problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping of Character Variants", "sec_num": "4.1" }, { "text": "Another issue is when two variant characters can be regarded as equivalent. As we have mentioned, the character \"\u8c4a\" is equivalent to \"\u8c50\" only when it is used as a Japanese Kanji. Its meaning in Traditional Chinese is a ritual vessel in ancient times (cf. Revised Mandarin Chinese Dictionary 11 , \u91cd\u7de8\u570b\u8a9e\u8fad\u5178\u4fee\u8a02\u672c), which is completely different from the current meaning of \"\u8c50\" (prosperous). This would be an interesting future topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping of Character Variants", "sec_num": "4.1" }, { "text": "When extracting Japanese personal names inside the Sinica Corpus (as described in Section 3.2), the mapping between Japanese Kanji and Traditional Chinese characters is necessary. 
Characters in the tables of Japanese surnames and first name Kanji need to be transformed into Traditional Chinese first.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Corresponding Chinese Characters for Japanese Kanji", "sec_num": "4.2" }, { "text": "10 http://www.edu.tw/files/site_content/M0001/87news/index.htm 11 http://dict.revised.moe.edu.tw/cgi-bin/newDict/dict.sh?cond=%E0T&pieceLen=50&fld=1&cat=&ukey=1838907571&op=&imgFont=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Corresponding Chinese Characters for Japanese Kanji", "sec_num": "4.2" }, { "text": "Each Kanji in a Japanese surname was changed into its corresponding Traditional Chinese character found by the method explained in Section 4.1. For example, the surname \"\u6edd\u6ca2\" (Takizawa) was changed into \"\u7027\u6fa4\" and \"\u4e2d\u66fd\u6839\" (Nakasone) was changed into \"\u4e2d\u66fe\u6839\". The newly created surnames were merged into the original Japanese surname table, and they share the same probabilities as the original Japanese surnames. If at least one Kanji character in a surname did not have a corresponding Traditional Chinese character (e.g. \"\u7551\" in the surname \"\u53e4\u7551,\" Huruhata), no new surname was created. The first name Kanji table was expanded in a similar way, along with the assignment of the probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Corresponding Chinese Characters for Japanese Kanji", "sec_num": "4.2" }, { "text": "Merging the newly created terms into the name probability tables makes our system capable of identifying various ways of writing a name at the same time. For example, our system can identify the two equivalent names in the sentence \"\u6edd\u6ca2\u8061\u5c31\u662f\u7027\u6fa4\u8070\" (which means \"\u6edd\u6ca2\u8061 is \u7027\u6fa4\u8070\"): \"\u6edd\u6ca2\" and \"\u7027\u6fa4\" can both be found in the Japanese surname table, just as \"\u8061\" and \"\u8070\" are both found in the Japanese first name table. Both \"\u6edd\u6ca2\u8061\" and \"\u7027\u6fa4\u8070\" are proposed as word candidates that are Japanese names, and they share the same probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Corresponding Chinese Characters for Japanese Kanji", "sec_num": "4.2" }, { "text": "Following the same idea, if we further expand the correspondence relationship to the Simplified Chinese character set, it will be possible to understand the sentence \"\u6edd\u6ca2\u8061\u548c\u6cf7\u6cfd\u806a\u90fd\u662f\u7027\u6fa4\u8070\" (\"\u6edd\u6ca2\u8061 and \u6cf7\u6cfd\u806a both are \u7027\u6fa4\u8070\"), where \"\u6edd\u6ca2\u8061\" is in Japanese, \"\u6cf7\u6cfd\u806a\" is in Simplified Chinese, and \"\u7027\u6fa4\u8070\" is in Traditional Chinese. This part has not yet been implemented but is quite promising.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Corresponding Chinese Characters for Japanese Kanji", "sec_num": "4.2" }, { "text": "In order to identify word variants written either in character variants or in Simplified Chinese, we expanded the dictionary vocabulary by changing the characters in a Traditional Chinese word into their character variants (including Simplified Chinese characters). 
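A small sketch of this expansion follows (our illustration; variants_of would come from the character-variant clusters of Section 4.1), and the prose example after it walks through the same enumeration:

```python
from itertools import product

def enumerate_word_variants(word, variants_of):
    # variants_of: char -> collection of variant characters (possibly empty)
    # Yields every variant spelling of `word`, excluding the original form itself.
    choices = [[c] + sorted(variants_of.get(c, ())) for c in word]
    for combo in product(*choices):
        variant = ''.join(combo)
        if variant != word:
            yield variant
```

For a 3-character word whose characters have two, one, and one variants respectively, this yields the eleven spellings enumerated next. 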
For example, given a Traditional Chinese word, ABC, each character is searched in the character-variant clusters introduced in Section 4.1. Every character variant found in the character-variant clusters is used to enumerate all possible word variants. Supposing that A', A\", B', and C' are variants of the characters A, B, and C, the following word variants will be enumerated: A'BC, AB'C, ABC', A'B'C, AB'C', A'BC', A'B'C', A\"BC, A\"B'C, A\"BC', and A\"B'C'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Word Variants", "sec_num": "4.3" }, { "text": "The newly enumerated word shares the same probability as its original form. Instead of merging the word variants and attaining a large dictionary, we assigned each group of the word variants a unique ID and indexed the bigram probability table (for word segmentation) by the group IDs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Word Variants", "sec_num": "4.3" }, { "text": "Since the mapping between Simplified Chinese characters and Traditional Chinese characters is not one-to-one, there may be identical word variants enumerated from different words. For example, the Simplified Chinese word variants of \"\u767d\u9762\" (white-faced) vs. \"\u767d\u9eb5\" (white noodles) are both \"\u767d\u9762,\" and the Simplified Chinese word variants \"\u6539\u5236\" (rule changing) and \"\u6539\u88fd\" (producing in a different model) are the same term \"\u6539\u5236,\" too. To determine the final probability of an ambiguous word variant, we experimented on three strategies where the final probability is the maximum, the minimum, or the sum of all of the probabilities of the original words. Section 5.4 reveals the results of this experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Word Variants", "sec_num": "4.3" }, { "text": "The experimental data for word segmentation is the Academia Sinica Balanced Corpus, Version 3.0 12 . The Sinica corpus is designed for language analysis purposes. Words in a sentence are separated by spaces and tagged with their POSs. The documents are written in Modern Mandarin and collected from different domains and topics. There are 316 files containing 743,718 sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Data and Evaluation Metrics", "sec_num": "5.1" }, { "text": "Our evaluation was done by 5-fold cross-validation. The 316 files were divided into 5 sets. Each set was used as the test set iteratively when the other sets were used as the development set to construct the lexicon and train probability models. The number of sentences in each set is given in Table 9 . ", "cite_spans": [], "ref_spans": [ { "start": 294, "end": 301, "text": "Table 9", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Experimental Data and Evaluation Metrics", "sec_num": "5.1" }, { "text": "The BI-score labels are defined as follows. Given a sentence, each character is labeled as B (at the beginning of a word) or I (inside a word) according to the segmentation in the test set or the segmentation generated by the system. The ratio of correct BI labels also reveals the performance of a word segmentation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Data and Evaluation Metrics", "sec_num": "5.1" }, { "text": "When evaluating using 5-fold cross-validation, we used micro-averaging to calculate the scores. 
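Concretely, the micro-averaged scores can be computed as in this small sketch (our notation; the per-fold raw counts are assumed inputs):

```python
def micro_scores(folds):
    # folds: list of dicts of raw counts per experiment set, with keys
    #   sys_words, gold_words, correct_words (for P/R/F), chars, correct_bi (for BI)
    sys_n = sum(f['sys_words'] for f in folds)
    gold_n = sum(f['gold_words'] for f in folds)
    hit = sum(f['correct_words'] for f in folds)
    p, r = hit / sys_n, hit / gold_n
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    bi = sum(f['correct_bi'] for f in folds) / sum(f['chars'] for f in folds)
    return p, r, f1, bi
```
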
That is, the values of the denominators and the numerators of precision, recall, and BI-score are the sums over the five experiment sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Data and Evaluation Metrics", "sec_num": "5.1" }, { "text": "This section shows the performance of our basic-model word segmentation system. System Sys1a uses only the known-word lexicon with bigram probability model. System Sys1b integrates the special-type word generation rules, including address, date, time, monetary, percentage, fraction, foreign string, and Internet address, as introduced in Section 2.2. The Sys2 systems further integrate the numbers, including Arabic and Chinese numbers. In order to see the impact of directly adopting the boundary of a number candidate, we experimented on two strategies for Sys2, denoted as Sys2a and Sys2b. As shown in Table 10 , Sys1b performs better because of the integration of special-type word generation rules. The maximum-likelihood probability model for numbers is also a better choice. ", "cite_spans": [], "ref_spans": [ { "start": 606, "end": 614, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Word Segmentation Baseline Performance", "sec_num": "5.2" }, { "text": "After integrating the Chinese personal name generation rules, the special-type probability for Chinese names is also employed. The difference between our work and Chen et al. (1998) is the use of Chinese name format probability and allowing personal names without surnames. Three systems were designed to observe the impact.", "cite_spans": [ { "start": 163, "end": 181, "text": "Chen et al. (1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on Handling Chinese and Japanese Personal Names", "sec_num": "5.3" }, { "text": "Sys3a: Using the Chinese name special-type probability, but not the format \u03c0 = 'nn' and the format probability Sys3b: Using the Chinese name special-type probability with the format \u03c0 = 'nn' but not the format probability Sys3c: Using the Chinese name special-type probability with the format \u03c0 = 'nn' and the format probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on Handling Chinese and Japanese Personal Names", "sec_num": "5.3" }, { "text": "All Sys3 systems are based on Sys2b. The evaluation results are shown in Table 11 . We can see that all of these methods (using the special-type probability for Chinese name, the name format of FN-only, and the Chinese name format probability) improve the performance. This confirms the success of name formats in personal name recognition and word segmentation. Two systems were designed to observe the effectiveness of the Japanese name special-type probability and the format probability. As the test set is encoded in BIG5, the Japanese name processing is performed under the BIG5 Traditional Chinese character set. Both Sys4 systems are based on Sys3c.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 81, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments on Handling Chinese and Japanese Personal Names", "sec_num": "5.3" }, { "text": "Sys4a: Using the Japanese name special-type probability without the format probability Sys4b: Using both the Japanese name special-type probability and the format probability Table 12 illustrates the performance after integrating Japanese name processing. 
We found that using only the Japanese name special-type probability resulted in a decline of the word segmentation performance, while using both probability models outperformed Sys3c, but not significantly. The reason may be the small number of Japanese names appearing in the Sinica Corpus, as we know that only 4,849 words in the 743,718 sentences were considered to be Japanese names (cf. Section 3.2). The improvement of Japanese name processing did not affect the performance of word segmentation significantly.", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 183, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments on Handling Chinese and Japanese Personal Names", "sec_num": "5.3" }, { "text": "In order to observe the real performance of Japanese name processing, we designed another experiment. A collection of 109 news articles was prepared, and the Japanese names in it were manually annotated. 862 occurrences of 216 distinct Japanese names were found.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strategies of Processing Japanese Names and 105 Character Variants in Traditional Chinese Text", "sec_num": null }, { "text": "Two kinds of observations were performed. The first one was to verify the ratio of Japanese names being correctly segmented before and after the integration of Japanese name processing. The results are shown in Table 13 , which were obtained by applying Sys3c and Sys4b on the 109 documents. This confirms that integrating Japanese name processing greatly improves the success rate. The second observation is to measure the precision and recall of Japanese name recognition. That is, the ratio of correct ones among the Japanese name candidates proposed by the system (precision) and the ratio of correctly proposed ones among the Japanese names in the test set (recall). The results are listed in Table 14 , where both recall and precision are about 75%, which is fair correctness but still needing improvement. This also shows that Japanese name processing is not an easy problem. ", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 219, "text": "Table 13", "ref_id": "TABREF0" }, { "start": 698, "end": 706, "text": "Table 14", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Strategies of Processing Japanese Names and 105 Character Variants in Traditional Chinese Text", "sec_num": null }, { "text": "This section discusses the performance of handling word variants. Unfortunately, we cannot find a suitable test set that contains annotations of character variants. The documents in the Sinica Corpus are encoded in BIG5, a subset of Traditional Chinese characters. There are only a few character variants appearing in the Sinica Corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Variant Experiments", "sec_num": "5.4" }, { "text": "Two experimental datasets were constructed for the evaluation. The first dataset was a copy of the Sinica Corpus with every character transformed into its Simplified Chinese form (the mapping is unambiguous and can be done by a lot of software). This dataset can be used to verify the ability of Simplified Chinese word handling of a word segmentation system. It can also be used to decide the probabilistic model for homographic variants from different words. 
The second one was a real corpus written in Simplified Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Variant Experiments", "sec_num": "5.4" }, { "text": "As mentioned in Section 4.3, the character mapping from Simplified Chinese to Traditional Chinese is one-to-many, so a Simplified Chinese word may be related to two or more different Traditional Chinese words. Three systems were designed to determine the unigram or bigram probability for such homographic word variants: Sys5a chose the maximum probability among the corresponding Traditional Chinese words, Sys5b chose the minimum, and Sys5c used the sum of the probabilities. Note that Chinese and Japanese name processing also suffers from this problem if the names are written in Simplified Chinese characters; to focus on word variant handling, the experiments were performed without personal name processing. All Sys5 systems were developed based on Sys2b, a system that does not integrate the name processing module. The evaluation results are listed in Table 15 . We can see that the choice of probability determination method barely affects the performance, which also shows that the system is capable of dealing with Simplified Chinese words in Traditional Chinese text. We chose Sys5a, the maximum-probability strategy, as our final system. Sys5a: Using the maximal probability of the corresponding source words; Sys5b: Using the minimal probability of the corresponding source words; Sys5c: Using the sum of the probabilities of the corresponding source words. The second experiment was done on the SIGHAN 1st Peking University Test Set, a Simplified Chinese word segmentation benchmark containing 380 sentences. We did not use its development set or lexicon to train our system; instead, we used Sys5a with the lexicon constructed from the Sinica Corpus. The experimental results show that the performance is worse: precision is 86.56%, recall is 81.47%, and F-measure is 83.94%. This is because the documents in the Peking University Test Set came from Mainland China, where the vocabulary is quite different from that used in Taiwan, so the lower performance is not surprising. The main purpose of this experiment is to show that our system can segment documents written in Simplified Chinese with a reasonable level of correctness.", "cite_spans": [], "ref_spans": [ { "start": 874, "end": 882, "text": "Table 15", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Word Variant Experiments", "sec_num": "5.4" }, { "text": "In this paper, we propose methods to find word candidates that are Japanese personal names (written in either Japanese Kanji or their equivalent Traditional Chinese characters) or word variants when doing word segmentation. Documents are encoded in UTF-8 so that characters of different languages can appear in the same document. Our word segmentation is based on a bigram probabilistic model, and it integrates the generation rules and probability models for various special types of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "When handling Chinese and Japanese personal names, we propose the idea of the name format probability model and discuss how the model can be built. We also propose a method to find the corresponding Traditional Chinese characters for Japanese Kanji so that a Japanese name can be detected in whichever script it is written. The experimental results show that the name format probability model does improve the performance, and the mappings between Japanese Kanji and Traditional Chinese characters do help to detect Japanese names more successfully.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6."
}, { "text": "The size of the Japanese surname list in our system, which contains only 15,702 surnames, is far less than the amount of 140 thousand mentioned in Wikipedia. Nevertheless, once a larger Japanese surname list can be found, it can be easily integrated into our system as long as we assign a small probability to those unseen surnames for smoothing. Furthermore, our knowledge in Japanese name processing is still not sufficient. As a future work, a syllable probabilistic model regarding the pronunciation of a name will be studied. The most important of all is to find a large collection of Japanese names for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Using the character variant clusters, Chinese words written in any character variants can be successfully detected as word candidates. Although the set of newly enumerated word variants is large, the computational complexity remains the same if denoting word variants by their group ID and using hash tables to do searching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "http://zh.wikipedia.org/wiki/\u4e2d\u570b\u59d3\u6c0f\uf99c\u8868 2 http://zh.wikipedia.org/wiki/\u8907\u59d3 3 http://www.ris.gov.tw/ch4/0940531-2.doc 4 http://www.greatchinese.com/surname/surname.htm 5 http://pjoke.com/showxing.php", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.myj7000.jp-biz.net 7 http://zh.wikipedia.org/wiki/\u65e5\u6587\u59d3\u540d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In fact, \"\u6ca2\u5149\" (\"Sawahikari\") is a Japanese surname and rarely used as a first name. 9 http://kanji.zinbun.kyoto-u.ac.jp/~yasuoka/ftp/CJKtable/UniVariants.Z", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Description of the NTU System Used for MET2", "authors": [ { "first": "H", "middle": [ "H" ], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [ "W" ], "last": "Ding", "suffix": "" }, { "first": "S", "middle": [ "C" ], "last": "Tsai", "suffix": "" }, { "first": "G", "middle": [ "W" ], "last": "Bian", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 7th Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, H.H., Ding, Y.W., Tsai S.C., & Bian, G.W. (1998). Description of the NTU System Used for MET2. In Proceedings of 7th Message Understanding Conference (MUC-7). Available: http://www.itl.nist.gov/iaui/894.02/related_projects/muc/index.html. Chuan-Jie Lin et al.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "PAT-tree-based keyword extraction for Chinese information retrieval", "authors": [ { "first": "L", "middle": [ "F" ], "last": "Chien", "suffix": "" } ], "year": 1997, "venue": "Proceedings of SIGIR97", "volume": "", "issue": "", "pages": "27--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien, L.F. (1997). PAT-tree-based keyword extraction for Chinese information retrieval. 
In Proceedings of SIGIR97, 27-31.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Improved Source-Channel Models for Chinese Word Segmentation", "authors": [ { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "M", "middle": [], "last": "Li", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Huang", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003)", "volume": "", "issue": "", "pages": "272--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao, J., Li, M., & Huang, C.N. (2003). Improved Source-Channel Models for Chinese Word Segmentation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), 272-279.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Combining machine learning with linguistic heuristics for Chinese word segmentation", "authors": [ { "first": "X", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the FLAIRS Conference", "volume": "", "issue": "", "pages": "241--246", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu, X. (2007). Combining machine learning with linguistic heuristics for Chinese word segmentation. In Proceedings of the FLAIRS Conference, 241-246.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "\u4e2d\u6587\u8fad\u5f59\u6b67\u7fa9\u4e4b\u7814\u7a76-\u65b7\u8a5e\u8207\u8a5e\u6027\u6a19\u793a", "authors": [ { "first": ")", "middle": [], "last": "\u5f6d\u8f09\u884d (peng", "suffix": "" }, { "first": "\u5f35\u4fca\u76db (", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1993, "venue": "\u7b2c\uf9d1\u5c46\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u7814\u8a0e\u6703\uf941\u6587\u96c6 (ROCLING-6)", "volume": "", "issue": "", "pages": "173--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u5f6d\u8f09\u884d (Peng) and \u5f35\u4fca\u76db (Chang) (1993). \u4e2d\u6587\u8fad\u5f59\u6b67\u7fa9\u4e4b\u7814\u7a76-\u65b7\u8a5e\u8207\u8a5e\u6027\u6a19\u793a [A study of Chinese lexical ambiguity: word segmentation and part-of-speech tagging]. In \u7b2c\uf9d1\u5c46\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u7814\u8a0e\u6703\uf941\u6587\u96c6 [Proceedings of the 6th R.O.C. Computational Linguistics Conference] (ROCLING-6), 173-194.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A dual-layer CRFs based joint decoding method for cascaded segmentation and labeling tasks", "authors": [ { "first": "Y", "middle": [], "last": "Shi", "suffix": "" }, { "first": "M", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of International Joint Conference on Artificial Intelligence (IJCAI '07)", "volume": "", "issue": "", "pages": "1707--1712", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shi, Y. & Wang, M. (2007). A dual-layer CRFs based joint decoding method for cascaded segmentation and labeling tasks. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI '07), 1707-1712.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Class-based Language Model Approach to Chinese Named Entity Identification", "authors": [ { "first": "J", "middle": [], "last": "Sun", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Gao", "suffix": "" } ], "year": 2003, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "8", "issue": "", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, J., Zhou, M., & Gao, J.F. (2003).
A Class-based Language Model Approach to Chinese Named Entity Identification. International Journal of Computational Linguistics and Chinese Language Processing, 8(2), 1-28.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Word segmentation in sentence analysis", "authors": [ { "first": "A", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 1998 International Conference on Chinese Information Processing", "volume": "", "issue": "", "pages": "169--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, A. & Jiang, Z. (1998). Word segmentation in sentence analysis. In Proceedings of the 1998 International Conference on Chinese Information Processing, 169-180.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An improved Chinese word segmentation system with conditional random field", "authors": [ { "first": "H", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Huang", "suffix": "" }, { "first": "M", "middle": [], "last": "Li", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "162--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, H., Huang, C.N., & Li, M. (2006). An improved Chinese word segmentation system with conditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, 162-165.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "log(P G (\u6edd\u6ca2|LN JP ) \u00d7 P G (\u5149|FN JP ) \u00d7 P G (\u03c0='SF'|S JPname )) vs. log(P G (\u6edd|LN JP ) \u00d7 P G (\u6ca2|FN JP ) \u00d7 P G (\u5149|FN JP ) \u00d7 P G (\u03c0='SF'|S JPname )) = (-10.70) + (-9.40) + (-5.15) + (-0.076) = -25.326", "type_str": "figure" }, "TABREF0": { "html": null, "type_str": "table", "content": "
Format | Cases | Examples | Format | Cases | Examples
SN only | 1-char SN | Prof. \uf9f4 | SN+FN | 1-char SN + 1-char FN | \u9673\u767b
 | 2-char SN | Mr. \u8af8\u845b | | 1-char SN + 2-char FN | \u738b\u5c0f\u660e
FN only | 1-char FN | \u6167 | | Two SNs + 1-char FN | \u5f35 \uf9e1\u5a25
 | 2-char FN | \u570b\u96c4 | | Two SNs + 2-char FN | \u5f35 \u9673\u7d20\u73e0
 | | | | 2-char SN + 1-char FN | \u8af8\u845b\uf977
 | | | | 2-char SN + 2-char FN | \u53f8\u99ac\u4e2d\u539f
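Read as a generator, the formats above translate directly into code. The following is a minimal sketch (our own illustration, not the paper's implementation; the tiny surname lexicons are hypothetical stand-ins) that enumerates name candidates starting at a given position, using the format codes s/d/n that also appear in the next table:

```python
# Hypothetical mini-lexicons; a real system would load full surname lists.
SINGLE_SN = {"陳", "王", "張", "林"}     # 's' = single-character surname
DOUBLE_SN = {"諸葛", "司馬"}             # 'dd' = two-character surname

def name_candidates(s):
    """Enumerate (format, candidate) pairs starting at s[0]; 'n' marks a
    first-name character. FN-only and double-surname ('ss') formats omitted."""
    out = []
    if s[:1] in SINGLE_SN:
        out += [("s", s[:1]), ("sn", s[:2]), ("snn", s[:3])]
    if s[:2] in DOUBLE_SN:
        out += [("dd", s[:2]), ("ddn", s[:3]), ("ddnn", s[:4])]
    # Drop candidates truncated by the end of the string.
    return [(fmt, cand) for fmt, cand in out if len(cand) == len(fmt)]

print(name_candidates("張德培是"))
# [('s', '張'), ('sn', '張德'), ('snn', '張德培')]
```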
", "text": "", "num": null }, "TABREF1": { "html": null, "type_str": "table", "content": "", "text": "lists all of the definitions of the Chinese name generation probabilities for Strategies of Processing Japanese Names and 93 Character Variants in Traditional Chinese Text every format, where LN CH", "num": null }, "TABREF2": { "html": null, "type_str": "table", "content": "
Format Probability | Count | Prob. | Format Probability | Count | Prob.
P G (\u03c0='s'|S CHname ) | 5,431 | 13.71% | P G (\u03c0='ddn'|S CHname ) | 126 | 0.32%
P G (\u03c0='n'|S CHname ) | 815 | 2.06% | P G (\u03c0='snn'|S CHname ) | 19,454 | 49.11%
P G (\u03c0='p'|S CHname ) | 487 | 1.23% | P G (\u03c0='ssn'|S CHname ) | 58 | 0.15%
P G (\u03c0='dd'|S CHname ) | 46 | 0.12% | P G (\u03c0='ddnn'|S CHname ) | 24 | 0.06%
P G (\u03c0='sn'|S CHname ) | 2,845 | 7.18% | P G (\u03c0='ssnn'|S CHname ) | 61 | 0.15%
P G (\u03c0='nn'|S CHname ) | 10,265 | 25.91% | Total | 39,612 |
", "text": "", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "
Name: \u5f35\u5fb7\u57f9
\u03c0 | \u03c3 | Probability
snn | M | log(P G (\u5f35|LN CH ) \u00d7 P M (\u5fb7|FN CH ) \u00d7 P M (\u57f9|FN CH ) \u00d7 P G (\u03c0='snn'|S CHname )) = (-1.26) + (-1.87) + (-2.74) + (-0.31) = -6.18
snn | F | log(P G (\u5f35|LN CH ) \u00d7 P F (\u5fb7|FN CH ) \u00d7 P F (\u57f9|FN CH ) \u00d7 P G (\u03c0='snn'|S CHname )) = (-1.26) + (-2.89) + (-3.27) + (-0.31) = -7.73
ssn | M | log(P G (\u5f35|LN CH ) \u00d7 P G (\u5fb7|LN CH ) \u00d7 P M (\u57f9|FN CH ) \u00d7 P G (\u03c0='ssn'|S CHname )) = (-1.26) + (-6.02) + (-2.74) + (-2.82) = -12.84
ssn | F | log(P G (\u5f35|LN CH ) \u00d7 P G (\u5fb7|LN CH ) \u00d7 P F (\u57f9|FN CH ) \u00d7 P G (\u03c0='ssn'|S CHname )) = (-1.26) + (-6.02) + (-3.27) + (-2.82) = -13.37
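The four hypotheses above can be scored mechanically. Below is a minimal sketch (ours, not the authors' code): the log10 values are copied from this worked example, while everything else is a hypothetical illustration of how the character generation probabilities combine with the format probability:

```python
# log10 probability tables; the values echo the 張德培 example above.
LOG_LN = {"張": -1.26, "德": -6.02}             # P_G(c|LN_CH): surname generation
LOG_FN = {"M": {"德": -1.87, "培": -2.74},      # P_M(c|FN_CH): male first names
          "F": {"德": -2.89, "培": -3.27}}      # P_F(c|FN_CH): female first names
LOG_FMT = {"snn": -0.31, "ssn": -2.82}          # P_G(pi|S_CHname): format probability

def name_logp(fmt, gender, name):
    """Per-character generation log-probs plus the format log-prob."""
    total = LOG_FMT[fmt]
    for role, ch in zip(fmt, name):
        total += LOG_LN[ch] if role == "s" else LOG_FN[gender][ch]
    return round(total, 2)

for fmt in ("snn", "ssn"):
    for gender in ("M", "F"):
        print(fmt, gender, name_logp(fmt, gender, "張德培"))
# snn M -6.18 (best), snn F -7.73, ssn M -12.84, ssn F -13.37
```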
", "text": "G (\u5f35|LN CH )\u00d7P M (\u5fb7|FN CH )\u00d7P M (\u57f9|FN CH )\u00d7P G (\u03c0='snn'|S G (\u5f35|LN CH )\u00d7P F (\u5fb7|FN CH )\u00d7P F (\u57f9|FN CH )\u00d7P G (\u03c0='snn'|S G (\u5f35|LN CH )\u00d7P G (\u5fb7|LN CH )\u00d7P M (\u57f9|FN CH )\u00d7P G (\u03c0='ssn'|S G (\u5f35|LN CH )\u00d7P G (\u5fb7|LN CH )\u00d7P F (\u57f9|FN CH )\u00d7P G (\u03c0='ssn'|S", "num": null }, "TABREF4": { "html": null, "type_str": "table", "content": "
Format | SN | FN | SN+FN
Example | \u6728\u6751, \u9577\u8c37\u5ddd | \uf9e4\u60e0, \u65b0\u4e00 | \u4f0a\u85e4\u7531\uf90c, \u9ad8\u6a4b\uf9cd\u7f8e\u5b50
", "text": "", "num": null }, "TABREF5": { "html": null, "type_str": "table", "content": "", "text": "Strategies of Processing Japanese Names and 97 Character Variants in Traditional Chinese Text first name Kanji.", "num": null }, "TABREF6": { "html": null, "type_str": "table", "content": "
\u5b50 | 4,821 | 3.60% | 3.60% | \u4ea8 | 46 | 0.03% | 89.99%
\u4e00 | 3,358 | 2.50% | 6.10% | \u745e | 46 | 0.03% | 90.03%
\u90ce | 3,237 | 2.41% | 8.52% | \u2026 | \u2026 | \u2026 | \u2026
\u7f8e | 2,230 | 1.66% | 10.18% | \u8912 | 1 | 0.00% | 99.99%
\u6b63 | 1,741 | 1.30% | 11.48% | \u7114 | 1 | 0.00% | 100.00%
\u2026 | \u2026 | \u2026 | \u2026 | Total: 2,320 Kanji; total freq = 134,055
", "text": "Kanji Freq P G (c j |FN JP ) Accm Prob. FN Kanji Freq P G (c j |FN JP ) Accm Prob.", "num": null }, "TABREF7": { "html": null, "type_str": "table", "content": "
SN | Freq | Gen. Prob. P G (c 1 \u2026 c m |LN JP ) | SN | Freq | Gen. Prob. P G (c 1 \u2026 c m |LN JP )
\u4f50\u85e4 | 1,928,000 | 1.65% | \u9ad8\u4e95\uf97c | 760 | 6.49\u00d710^-6
\uf9b1\u6728 | 1,707,000 | 1.46% | \u6589\u85e4 | 111 | 9.47\u00d710^-7
\u9ad8\u6a4b | 1,416,000 | 1.21% | \u4e09\u904a\u4ead | 106 | 9.05\u00d710^-7
\u7530\u4e2d | 1,336,000 | 1.14% | \u2026 | \u2026 | \u2026
\u6e21\u8fba | 1,135,000 | 0.97% | \u57ce\u571f | 1 | 8.54\u00d710^-9
\u4f0a\u85e4 | 1,080,000 | 0.92% | \u99d2\u5c3e | 1 | 8.54\u00d710^-9
\u2026 | \u2026 | \u2026 | Total: 15,702 surnames; total freq = 117,156,792
", "text": "", "num": null }, "TABREF8": { "html": null, "type_str": "table", "content": "
Format | \u03c0='S' | \u03c0='F' | \u03c0='SF' | Total
Frequency | 718 | 1,120 | 3,011 | 4,849
Probability | 14.90% | 23.24% | 62.48% |
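To make concrete how the surname probabilities, first-name Kanji probabilities, and format probabilities in the tables above combine into a Japanese name score, here is a minimal sketch (our own illustration, not the paper's code; the probability values echo the tables, and a real system would load the full 15,702-surname and 2,320-Kanji lists):

```python
import math

# Hypothetical subsets of the tables above: surname generation P_G(.|LN_JP),
# first-name Kanji generation P_G(.|FN_JP), and format probabilities
# P_G(pi|S_JPname) for pi in {S, F, SF}.
P_LN = {"佐藤": 0.0165, "鈴木": 0.0146, "高橋": 0.0121}
P_FN = {"子": 0.0360, "一": 0.0250, "郎": 0.0241}
P_FMT = {"S": 0.1490, "F": 0.2324, "SF": 0.6248}

def fn_logp(chars):
    """log10 of generating a first name Kanji-by-Kanji, or None if impossible."""
    total = 0.0
    for c in chars:
        if c not in P_FN:
            return None
        total += math.log10(P_FN[c])
    return total

def best_jp_name_logp(kanji):
    """Best log10 probability over the S / F / SF format hypotheses."""
    scores = []
    if kanji in P_LN:                             # pi = 'S': surname only
        scores.append(math.log10(P_LN[kanji]) + math.log10(P_FMT["S"]))
    fn = fn_logp(kanji)                           # pi = 'F': first name only
    if fn is not None:
        scores.append(fn + math.log10(P_FMT["F"]))
    for i in range(1, len(kanji)):                # pi = 'SF': surname + first name
        fn = fn_logp(kanji[i:])
        if kanji[:i] in P_LN and fn is not None:
            scores.append(math.log10(P_LN[kanji[:i]]) + fn + math.log10(P_FMT["SF"]))
    return max(scores) if scores else None

print(best_jp_name_logp("佐藤一郎"))   # ≈ -5.21, via the SF split 佐藤|一郎
```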
", "text": "Format Probability P G (\u03c0='S'|S JPname ) P G (\u03c0='F'|S JPname ) P G (\u03c0='SF'|S JPname ) Total", "num": null }, "TABREF9": { "html": null, "type_str": "table", "content": "
File ID | Test Set ID | No. of Files | Sentences | Known Words | Unknown Words
000~065 | ASBCset0 | 66 | 148,575 | 146,477 | 15,675
066~129 | ASBCset1 | 64 | 149,713 | 146,275 | 15,877
130~183 | ASBCset2 | 54 | 148,870 | 146,634 | 15,518
184~244 | ASBCset3 | 61 | 148,012 | 146,024 | 16,128
245~315 | ASBCset4 | 71 | 148,548 | 146,004 | 16,148
The performance of word segmentation was evaluated by the following metrics: precision,
recall, F-measure, and BI score:
precision = (number of correct words being segmented) / (number of words segmented by the system) (6)
recall = (number of correct words being segmented) / (number of words in the segmented test set) (7)
F-measure = (2 \u00d7 precision \u00d7 recall) / (precision + recall) (8)
12 http://godel.iis.sinica.edu.tw/CKIP/20corpus.htm
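A minimal sketch of how the word-level metrics in Equations (6)-(8) can be computed (our illustration; the BI score is omitted): words are compared as character-offset intervals, so a word counts as correct only when both of its boundaries match the gold segmentation.

```python
def to_intervals(words):
    """Map a segmentation to the set of (start, end) character spans."""
    spans, start = set(), 0
    for w in words:
        spans.add((start, start + len(w)))
        start += len(w)
    return spans

def prf(system_words, gold_words):
    """Precision, recall, and F-measure over correctly segmented words."""
    sys_spans, gold_spans = to_intervals(system_words), to_intervals(gold_words)
    correct = len(sys_spans & gold_spans)
    precision = correct / len(sys_spans)
    recall = correct / len(gold_spans)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

print(prf(["廣島", "亞運", "時"], ["廣島", "亞運", "時"]))          # (1.0, 1.0, 1.0)
print(prf(["國", "小林", "佩萱", "老師"], ["國小", "林佩萱", "老師"]))  # only 老師 matches
```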
", "text": "", "num": null }, "TABREF10": { "html": null, "type_str": "table", "content": "
System | R | P | F | BI
Sys1a | 95.66 | 92.72 | 94.16 | 96.96
Sys1b | 95.87 | 93.31 | 94.57 | 97.20
Sys2a | 95.97 | 93.57 | 94.76 | 97.30
Sys2b | 96.16 | 93.68 | 94.90 | 97.38
", "text": "", "num": null }, "TABREF11": { "html": null, "type_str": "table", "content": "
System | R | P | F | BI
Sys3a | 96.39 | 94.97 | 95.68 | 97.90
Sys3b | 96.42 | 95.49 | 95.95 | 98.05
Sys3c | 96.57 | 95.53 | 96.04 | 98.10
", "text": "", "num": null }, "TABREF12": { "html": null, "type_str": "table", "content": "
System | R | P | F | BI
Sys3c | 96.57 | 95.53 | 96.04 | 98.10
Sys4a | 96.54 | 95.54 | 96.04 | 98.10
Sys4b | 96.56 | 95.56 | 96.06 | 98.10
", "text": "", "num": null }, "TABREF13": { "html": null, "type_str": "table", "content": "
System | Correctly segmented names | Ratio
Sys3c | 154 | 17.87%
Sys4b | 717 | 83.18%
Total | 862 |
", "text": "", "num": null }, "TABREF14": { "html": null, "type_str": "table", "content": "
System | P | R
Sys4b | 74.31% (648/872) | 75.17% (648/862)
Some examples of correct and incorrect word segmentation results before and after integrating Japanese name processing are given below.
Successful examples:
Sys3c | Sys4b | Sys3c | Sys4b
\u5c0f \uf9f4\u606d\u4e8c | \u5c0f\uf9f4\u606d\u4e8c | \u5927 \u524d \u7814\u4e00 | \u5927\u524d\u7814\u4e00
\u77f3\u539f\u614e \u592a\u90ce | \u77f3\u539f\u614e\u592a\u90ce | \u85e5\u5e2b \u4e38 \u535a\u5b50 | \u85e5\u5e2b\u4e38\u535a\u5b50
Incorrect examples:
Sys3c | Sys4b | Sys3c | Sys4b
\u9ebb\u5e03 \u548c \u6728\u6750 | \u9ebb\u5e03\u548c \u6728\u6750 | \u74e6\u65af\u4e95 \u539f\u6709 | \u74e6\u65af \u4e95\u539f\u6709
\u570b\u5c0f \uf9f4\u4f69\u8431 \uf934\u5e2b | \u570b \u5c0f\uf9f4 \u4f69\u8431 \uf934\u5e2b | \u5ee3\u5cf6 \u4e9e\u904b \u6642 | \u5ee3\u5cf6\u4e9e\u904b\u6642
", "text": "", "num": null }, "TABREF15": { "html": null, "type_str": "table", "content": "
System | R | P | F | BI
Sys5a | 96.11 | 93.53 | 94.80 | 97.33
Sys5b | 95.95 | 93.16 | 94.54 | 97.21
Sys5c | 96.11 | 93.53 | 94.80 | 97.33
", "text": "The second experiment was done on GHAN 1 st Peking University Test Set, a Simplified Chinese word segmentation benchmark. The test set contained 380 sentences. We did not use its development set and lexicon to train our system. Instead, we used Sys5a and the lexicon", "num": null } } } }