{ "paper_id": "O03-4001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:01:27.204027Z" }, "title": "Customizable Segmentation of Morphologically Derived Words in Chinese", "authors": [ { "first": "Andi", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research Address", "location": { "addrLine": "21062 NE 81 st Street", "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "andiwu@microsoft.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The output of Chinese word segmentation can vary according to different linguistic definitions of words and different engineering requirements, and no single standard can satisfy all linguists and all computer applications. Most of the disagreements in language processing come from the segmentation of morphologically derived words (MDWs). This paper presents a system that can be conveniently customized to meet various user-defined standards in the segmentation of MDWs. In this system, all MDWs contain word trees where the root nodes correspond to maximal words and leaf nodes to minimal words. Each non-terminal node in the tree is associated with a resolution parameter which determines whether its daughters are to be displayed as a single word or separate words. Different outputs of segmentation can then be obtained from the different cuts of the tree, which are specified by the user through the different value combinations of those resolution parameters. We thus have a single system that can be customized to meet different segmentation specifications.", "pdf_parse": { "paper_id": "O03-4001", "_pdf_hash": "", "abstract": [ { "text": "The output of Chinese word segmentation can vary according to different linguistic definitions of words and different engineering requirements, and no single standard can satisfy all linguists and all computer applications. Most of the disagreements in language processing come from the segmentation of morphologically derived words (MDWs). This paper presents a system that can be conveniently customized to meet various user-defined standards in the segmentation of MDWs. In this system, all MDWs contain word trees where the root nodes correspond to maximal words and leaf nodes to minimal words. Each non-terminal node in the tree is associated with a resolution parameter which determines whether its daughters are to be displayed as a single word or separate words. Different outputs of segmentation can then be obtained from the different cuts of the tree, which are specified by the user through the different value combinations of those resolution parameters. We thus have a single system that can be customized to meet different segmentation specifications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A written sentence in Chinese consists of a string of evenly spaced characters with no delimiters between the words 1 . In any word-based Chinese language processing 2 , therefore, segmenting each sentence into words is a prerequisite. However, due to some special linguistic properties of Chinese words, there is not a generally accepted standard that can be used to unambiguously determine \"wordhood\" in every case. 3 While native speakers of Chinese are often able to agree on how to segment a string of characters into words, there are a substantial number of cases where no agreement can be reached [Sproat et al. 1996] . 
Besides, different natural language processing (NLP) applications may have different requirements that call for different definitions of words and different granularities of word segmentation. This presents a challenging problem for the development of annotated Chinese corpora that are expected to be useful for training multiple types of NLP systems. It is also a challenge to any Chinese word segmentation system that claims to be capable of supporting multiple user applications. In what follows, we will discuss this problem mainly from the viewpoint of NLP and propose a solution that we have implemented and evaluated in an existing Chinese NLP system 4 .", "cite_spans": [ { "start": 418, "end": 419, "text": "3", "ref_id": null }, { "start": 604, "end": 624, "text": "[Sproat et al. 1996]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In Section 2, we will look at the problem areas where disagreements among different standards are most likely to arise. We will identify the alternatives in each case, discuss the computational motivation behind each segmentation option, and suggest possible solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This section can be skipped by readers who are already familiar with Chinese morphology and the associated segmentation problems. Section 3 presents a customizable system where most of the solutions suggested in Section 2 are implemented. The implementation will be described in detail and evaluation results will be presented. We also offer a proposal for the development of linguistic resources that can be customized for different purposes. In Section 4, we conclude that, with the preservation of word-internal structures and a set of resolution parameters, we can have a Chinese system or a single annotated corpus that can be conveniently customized to meet different word segmentation requirements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "How to identify words in Chinese has been a long-standing research topic in Chinese linguistics and Chinese language processing. Many different criteria have been proposed and any serious discussion of this issue will take no less than a book such as [Packard 2000 ].", "cite_spans": [ { "start": 251, "end": 264, "text": "[Packard 2000", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "Among the reasons that make this a hard and intriguing problem are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "\u2022 Chinese orthography has no indication of word boundaries except punctuation marks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." 
}, { "text": "\u2022 The criteria for wordhood can vary depending on whether we are talking about the phonological word, lexical word, morphological word, syntactic word, semantic word, or psychological word [Packard 2000 , Di Sciullo and Williams 1987 , Dai 1992 , Dai 1997 , Duanmu 1997 , Anderson 1992 , Sadock 1991 , Selkirk 1982 .", "cite_spans": [ { "start": 189, "end": 202, "text": "[Packard 2000", "ref_id": "BIBREF10" }, { "start": 203, "end": 233, "text": ", Di Sciullo and Williams 1987", "ref_id": "BIBREF3" }, { "start": 234, "end": 244, "text": ", Dai 1992", "ref_id": "BIBREF2" }, { "start": 245, "end": 255, "text": ", Dai 1997", "ref_id": "BIBREF1" }, { "start": 256, "end": 269, "text": ", Duanmu 1997", "ref_id": "BIBREF4" }, { "start": 270, "end": 285, "text": ", Anderson 1992", "ref_id": null }, { "start": 286, "end": 299, "text": ", Sadock 1991", "ref_id": null }, { "start": 300, "end": 314, "text": ", Selkirk 1982", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "\u2022 Unlike Japanese, Chinese has very little inflectional morphology that can provide clues to word boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "\u2022 Many bound morphemes in Chinese used to be free morphemes and they are still used as free morphemes occasionally. Therefore the distinction between bound morphemes and words can be fuzzy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "\u2022 The character sequence of many Chinese words can be made discontinuous through morphological processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "\u2022 Word-internal structures look similar to syntactic structures. As a result, there is often confusion between words and phrases [Dai 1992 ].", "cite_spans": [ { "start": 129, "end": 138, "text": "[Dai 1992", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "\u2022 Structural information is not always sufficient for identifying a sequence of characters as a word. Frequency of the sequence, mutual information between the component syllables, and the number of syllables in that sequence also play a role (Summarized in [Sproat 2002] ).", "cite_spans": [ { "start": 258, "end": 271, "text": "[Sproat 2002]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "As a result, native speakers of Chinese often disagree on whether a given character string is a word. As reported in [Sproat et al, 1996] , the rate of agreement among human judges was only 76%. It is not hard to understand, then, why Chinese linguists have had such a hard time defining words. However, we do not have to wait for linguists to reach a consensus before we do segmentation in NLP. In computer applications, we are more concerned with \"segmentation units\" than \"words\". While words are supposed to be well-defined, unambiguous and static linguistic entities, segmentation units are not. In fact, segmentation units are expected to vary from application to application. In information retrieval, for example, the segmentation units are search terms, whose sizes may vary according to specific needs. 
A system aimed at precision will require \"larger\" units while a system aimed at recall will require \"smaller\" ones.", "cite_spans": [ { "start": 117, "end": 137, "text": "[Sproat et al, 1996]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "A good Chinese IR system should be flexible with the output of word segmentation so that search terms of different sizes can be generated. In machine translation, the segmentation units are strings that can be mapped onto the words of another language. An MT system should not be committed to a single segmentation, since the granularity of that segmentation may be good for some mappings but not for others. We can do better if a variety of segmentation units are generated so that all possible words are made available as candidates for alignment. In an N-gram language model, the segmentation units are the \"grams\" and their sizes may need to be adjusted against the perplexity of the model or the sparseness of data. In text-to-speech systems, the segmentation units can be prosodic units and the units that are good for IR may not be good for TTS. In short, a segmentation system can be much more useful if it can provide alternative segmentation units. Alternative units provide linguistic information at different levels and each alternative can serve a specific purpose. We will see some concrete examples in the remainder of this section. To facilitate the use of terminology, we will use \"words\" to mean \"segmentation units\" in the rest of this paper. Now where does the variability in segmentation units come from? If we compare the outputs of various word segmentation systems, we will find that they actually have far more similarities than differences. This is mainly due to the fact that the word lists used by different segmenters have a lot in common. The actual differences we observe usually involve words that are not typically listed in the dictionary. These words are more dynamic in nature and are usually formed through productive morphological processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "It is those morphologically derived words (MDWs hereafter) that are most controversial and most likely to be treated differently in different standards and different systems. This is the main focus of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "The morphological processes we will be looking at have all been discussed extensively in the literature and a brief summary of them can be found in [Sproat 2002 ]. We will not attempt to review the literature here. Instead, we will concentrate on cases where differences in segmentation are likely to arise. Here are the main categories of morphological processes we will go through:", "cite_spans": [ { "start": 148, "end": 160, "text": "[Sproat 2002", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." }, { "text": "\u2022 Reduplication", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Areas for Customization", "sec_num": "2." 
}, { "text": "\u2022 Directional and resultative compounding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Affixation", "sec_num": null }, { "text": "\u2022 Merging and splitting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Affixation", "sec_num": null }, { "text": "During the discussion, we will make frequent reference to the following four existing segmentation standards:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Named entities and factoids", "sec_num": null }, { "text": "(1) The segmentation guidelines for the Penn Chinese Treebank [Xia 2000 ] (\"CHTB\" hereafter).", "cite_spans": [ { "start": 62, "end": 71, "text": "[Xia 2000", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "\u2022 Named entities and factoids", "sec_num": null }, { "text": "(2) The guidelines for the Beijing University Institute of Computational Linguistics", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Named entities and factoids", "sec_num": null }, { "text": "Corpus [Yu 1999 ] (\"BU\" hereafter). These guidelines closely follow the GB standard [GB/T 13715-92, 1993] but have some additional specifications.", "cite_spans": [ { "start": 7, "end": 15, "text": "[Yu 1999", "ref_id": "BIBREF21" }, { "start": 84, "end": 105, "text": "[GB/T 13715-92, 1993]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "\u2022 Named entities and factoids", "sec_num": null }, { "text": "(3) The ROCLING standard developed at Academia Sinica in Taiwan. [Huang et al. 1997 , ROCLING 1997 ( \"ROCLING\" hereafter).", "cite_spans": [ { "start": 65, "end": 83, "text": "[Huang et al. 1997", "ref_id": "BIBREF7" }, { "start": 84, "end": 98, "text": ", ROCLING 1997", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "\u2022 Named entities and factoids", "sec_num": null }, { "text": "(4) The standard used in our own system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Named entities and factoids", "sec_num": null }, { "text": "Our segmentation system is developed as an integral part of a Chinese parser where initial word segmentation produces a weighted word lattice. The word lattice contains all the dictionary words plus the MDWs formed by morphological rules. Syntactic parsing takes this word lattice as its input and the final segmentation corresponds to the leaves of the best parse tree 5 . Segmentation ambiguities are resolved in the parsing process and the correct segmentation is the one that enables a successful parse. In cases where parsing fails, we back off to partial parsing and use dynamic programming to assemble a tree that consists of the largest partial trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Named entities and factoids", "sec_num": null }, { "text": "The main patterns of reduplication in Chinese are AA, ABAB, AABB, AXA, AXAY, XAYA, AAB and ABB. Examples of these patterns can be found in Appendix 1. Existing standards do not have much disagreement over the segmentation of AA, AABB, AXAY, XAYA, AAB and ABB. These are all considered single words for the simple reason that, except in the case of AA, breaking them up will result in segments that are not independent words. 
The problem cases are ABAB and AXA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduplication", "sec_num": "2.1" }, { "text": "A representative example of this is \"\u8ba8\u8bba\u8ba8\u8bba\" (taolun-taolun: discuss-discuss \"have a discussion\"). It is considered a single word in the CHTB and ROCLING standards, but two separate words in the BU standard. According to CHTB and ROCLING, ABAB is just a variation of AA, where the reduplicated word is made of two characters instead of one. Since the meaning of AA (such as \"\u770b\u770b\" (kan-kan: look-look \"take a look\")) or ABAB is not compositional, 6 they should both be considered single words. According to the BU standard, however, \"\u8ba8\u8bba\u8ba8\u8bba\" should be broken up because \"\u8ba8\u8bba\" can be looked up in the dictionary but \"\u8ba8\u8bba\u8ba8\u8bba\" cannot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ABAB", "sec_num": "2.1.1" }, { "text": "Different NLP applications can also have different requirements. The one-word segmentation may simplify syntactic analysis but the two-word segmentation might be better for information retrieval or word-based statistical summarization. For pinyin-to-character conversion, adding the reduplicated form to the word list should improve accuracy but may not have the desired effect if the data is too sparse. In machine translation, it will be desirable to have both: the one-word analysis will make it easier for us to learn mappings between, say, \"\u8ba8\u8bba\u8ba8\u8bba\" and \"have a discussion\", whereas the two-word analysis will let us translate \"\u8ba8\u8bba\" into \"discuss\" in case no mapping is found for \"\u8ba8\u8bba\u8ba8\u8bba\" in the training data. In our system, we treat ABAB as a single word with internal structure, i.e. [\u8ba8\u8bba \u8ba8\u8bba], so that we can have access to both kinds of information. The word also has a \"lemma\" attribute indicating that the \"underlying form\" is \"\u8ba8\u8bba\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ABAB", "sec_num": "2.1.1" }, { "text": "This covers cases like the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AXA", "sec_num": "2.1.2" }, { "text": "\u8bd5\u4e00\u8bd5 shi-yi-shi: try-one-try \"give it a try\" \u8bd5\u4e86\u8bd5 shi-le-shi: try-LE-try 7 \"gave it a try\" \u8bd5\u4e86\u4e00\u8bd5 shi-le-yi-shi: try-LE-one-try \"gave it a try\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AXA", "sec_num": "2.1.2" }, { "text": "Both BU and ROCLING regard those expressions as separate words, while CHTB treats them as single words with internal structures. Our system also analyzes them as single words. To represent the fact that AXA is an instance of A with additional aspectual information, we store two additional attributes in this word: a \"lemma\" attribute that holds the \"underlying form\" of the MDW (e.g. \"\u8bd5\" for \"\u8bd5\u4e86\u8bd5\") and an \"aspect\" attribute whose value(s) record the aspectual information carried by \"\u4e00\" and/or \"\u4e86\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AXA", "sec_num": "2.1.2" }, { "text": "The lemma attribute is in fact assigned in each type of reduplication. This is especially important for AABB, AAB and ABB. 
In the case of AABB such as \"\u6e05\u6e05\u695a\u695a\" (qing-qing-chu-chu \"very clear\"), for instance, we will not get \"\u6e05\u695a\" (qingchu \"clear\") unless we segment it into \"\u6e05 / \u6e05\u695a / \u695a\", which is not acceptable by any standard because of the dangling bound morphemes on the two sides. This problem disappears once we have \"\u6e05\u695a\" represented as the lemma of the whole reduplicated form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AXA", "sec_num": "2.1.2" }, { "text": "Affixation is a very productive morphological process in Chinese. Examples of various derivational processes can be found in Appendix II. As we can see, the morphological rules that combine stems with affixes are almost indistinguishable from the syntactic rules that attach a modifier to a head. The only difference is that the modifier (in the case of prefixation) or the head (in the case of suffixation) is supposed to be a bound morpheme. However, the line between free morphemes and bound morphemes is often hard to draw in Chinese. 8 There are some relatively clear cases, such as \u975e (fei \"non-\") and \u8d85 (chao \"super-\") as prefixes and \u8005 (zhe \"-er\") and \u5b66 (xue \"-ology\") as suffixes, but the distinction is fuzzy in many cases. 7 Function words like \u4e86 have no English translation and therefore will be glossed by the uppercase versions of their pronunciation. 8 Here are a few borderline cases: \u603b\u5de5\u7a0b\u5e08 zong-gongchengshi \"chief engineer\" \u526f\u4e3b\u5e2d fu-zhuxi \"vice-chairman\" \u8db3\u7403\u573a zuqiu-chang \"soccer field\"", "cite_spans": [ { "start": 733, "end": 734, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "\u8b66\u5bdf\u5c40 jingcha-ju \"police station\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "\u7164\u6c14\u7089 meiqi-lu \"gas stove\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "Are they words or phrases?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "Even the agentive suffix \u8005 can act as a free morpheme in cases like \"\u6301\u67aa\u95ef\u5165\u6c11\u5b85\u8005\" (chiqiang-chuang-ru-min-zhai-zhe: carry-gun-break-into-civilian-residence-er \"people who broke into houses with guns\") where \u8005 is the head of a noun phrase modified by a relative clause. To avoid this thorny issue, different segmentation standards resorted to different definitions of affixation. In the CHTB standard, the term \"affixation\" is not explicitly used. Instead, it describes prefixation as JJ+N where JJ is monosyllabic, and suffixation as N+N where the second N is monosyllabic. The ROCLING standard distinguishes between affixes, \"word beginnings\" (\u63a5\u5934\u8bcd jietouci) and \"word endings\" (\u63a5\u5c3e\u8bcd jieweici), but they are functionally equivalent in derivational rules. 
The BU standard tries to distinguish between affixation and modifier-head phrases by restricting affixation to words that end in a pre-specified list of affixes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "In terms of segmentation, all the standards agree that MDWs derived from affixation should be treated as single words. In actual NLP applications, however, we often wish to have access both to the derived word as a whole and to its components as separate words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "In machine translation, for instance, it might be desirable to have a choice of translating either the whole or the parts: translate the whole if a translation for the whole can be found and back off to the parts otherwise. Take \u70d8\u5e72\u673a (honggan-ji: dry-machine \"dryer\") as an example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "Ideally the whole word should be translated into \"dryer\". However, if our translation knowledge base has no translation for \u70d8\u5e72\u673a but does have translations for \u70d8\u5e72 and \u673a, we should be able to translate it as \"drying machine\" given that the parts are also available. In information retrieval, we may also want to search for the parts if the query term as a whole is not found. For example, we may want to retrieve texts containing \u8b66\u5bdf (jingcha \"police\") when the query term is \u8b66\u5bdf\u5c40 (jingcha-ju: police-bureau \"police station\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "In our system, we treat complex words derived from affixation as single words, just as the other standards do, but we also keep their internal structures. For example, the complex word \u6838\u7269\u7406\u5b66\u5bb6 (he-wuli-xue-jia: nuclear-physics-science-expert \"nuclear physicist\") is represented as [[[\u6838 \u7269\u7406] \u5b66] \u5bb6].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "Each derived word contains such a sub-tree. The subtree functions as a single leaf node in syntactic analysis but it can be made visible after parsing to become part of the parse tree if necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affixation", "sec_num": "2.2" }, { "text": "There are many kinds of compounding in Chinese. In terms of word segmentation, the most problematic ones are directional compounding and resultative compounding. In directional compounding, a verb is followed by a directional complement, such as \u4e0a (shang, \"up\"), \u4e0b (xia \"down\"), \u8fdb\u53bb (jinqu \"into\"), \u51fa\u6765 (chulai \"out\"), which indicates the direction of the action expressed by the verb. In resultative compounding, a verb is followed by a resultative complement which is a verb or adjective that indicates what results from the action of the first verb. 
In both cases, the verb and the complement can be separated by \u5f97 (de) or \u4e0d (bu) to express the possibility of the verb-complement relationship. Here are some examples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "Directional compounding:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "\u8d70\u8fdb zou-jin: walk-enter \"walk into\" \u8d70\u8fdb\u53bb zou-jinqu: walk-enter \"walk in\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "\u8d70\u5f97\u8fdb\u53bb zou-de-jinqu: walk-DE-enter \"can walk in\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "Resultative compounding:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "\u5e26\u8d70 dai-zou: take-go \"take away\" \u5e26\u5f97\u8d70 dai-de-zou: take-DE-go \"can take away\" \u5e26\u4e0d\u8d70 dai-bu-zou: take-not-go \"cannot take away\" \u770b\u6e05\u695a kan-qingchu: see-clear \"see clearly\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "\u770b\u5f97\u6e05\u695a kan-de-qingchu: see-DE-clear \"can see clearly\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "\u770b\u4e0d\u6e05\u695a kan-bu-qingchu: see-not-clear \"cannot see clearly\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "The segmentation of those compounds depends on many factors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "(1) Type of compounding. Directional compounds are more likely to be treated as single words than resultative compounds. Both CHTB and ROCLING follow this principle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "(2) Word length. Those compounds are more likely to be treated as separate units if their total length is more than 2. CHTB provides internal structures when the compound is longer than 2 characters. ROCLING treats \"\u770b\u6e05\" (kan-qing: see-clear \"see clearly\") as one word but \"\u770b\u6e05\u695a\" (kan-qingchu: see-clear \"see clearly\") as two words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "(3) Frequency. Compounds that are more frequent, either synchronically or diachronically, tend to be treated as one word. Compare \u6253\u7834 (da-po: hit-break \"hit and make it break\") and \u6253\u75db (da-tong: hit-hurt \"hit and make someone hurt\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "These two compounds have exactly the same internal structure and the same word length, but the former is more likely to be regarded as a single word than the latter, simply because \u6253\u7834 is more frequent. 
The BU standard assumes that all the frequent compounds are already in its lexicon. Therefore non-lexicalized compounds are to be broken up into independent words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "(4) Mutual information [Sproat and Shih 1990] . Compounds whose components have strong mutual information between them are usually taken as single words. For example, \u6495\u88c2 (si-lie: tear-split \"tear open\") is not as frequent as \u6495\u574f (si-huai: tear-bad \"tear and break\"), but \u6495\u88c2 is lexicalized in the BU dictionary while \u6495\u574f is not.", "cite_spans": [ { "start": 23, "end": 45, "text": "[Sproat and Shih 1990]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "(5) Some resultative verbs are more independent and therefore more likely to stand on their own. Typical examples are \u5b8c (wan \"finish\") and \"\u7ed9\" (gei \"give\") which have some special grammatical functions 9 in addition to being resultative complements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "(6) \"V + \u5f97/\u4e0d + complement\" structures are segmented into separate words in BU and ROCLING but kept as single items with internal structures in CHTB 10 . The main reason for keeping them together is that the verb and the complement can usually form a single word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "NLP applications have considerations that are not always compatible with human judgment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "In machine translation, it often makes more sense to break up directional compounds into independent words and keep resultative compounds as single words, contrary to the tendencies we observed above. Directional compounds often correspond to verb-preposition sequences in other languages. The compound \"\u8d70\u8fdb\", for example, corresponds to \"walk into\" in English. If \"\u8d70\u8fdb\" is segmented into two words, we will be able to align \"\u8d70\" with \"walk\" and \"\u8fdb\" with \"into\". After seeing other instances of Verb+\u8fdb, such as \"\u8dd1\u8fdb\" (pao-jin: run-enter \"run into\") and \"\u8df3\u8fdb\" (tiao-jin: jump-enter \"jump into\"), we can come to the generalization that Verb+\u8fdb is to be translated as Verb+into in English. If those compounds are reduced to single words, we can still learn the correspondence between \"\u8d70\u8fdb\" and \"walk into\", but the generalization is not so easy to reach. Resultative compounds, on the other hand, are much more likely to correspond to single words in languages that are unrelated to Chinese. \"\u6253\u7834\", for example, will most likely align with \"break\" in English rather than \"hit and break\" or \"break by hitting\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "In the case of \"V + \u5f97/\u4e0d + complement\" structures, it is important to know the relationship between the verb and the complement. 
We need a representation where \u5403\u5f97\u4e0b (chi-de-xia: eat-DE-down \"can eat up\"), for instance, can be interpreted as having more or less the same meaning as \"\u80fd\u5403\u4e0b\" (neng-chi-xia: can-eat-down \"can eat up\"). This is crucial not only for semantic analysis, but also for such seemingly simple computer applications as various types of Chinese input methods where a language model is used to select the best sequence of characters. Most existing IME systems are error-prone when the input contains the \"V + \u5f97/\u4e0d + complement\" structure. They are unable to relate the verb and the complement even though the verb-complement bigram is in the language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "To meet the needs of as many standards and applications as possible, our system treats all directional and resultative compounds as single words while preserving their internal structures. 9 \u5b8c can be viewed as an aspectual marker indicating the completion of an action while \u7ed9 may have a role similar to the English \"to\" in dative constructions. 10 Except in cases like \u5403\u4e0d\u4e86 (chi-bu-liao: eat-not-done \"unable to eat anymore\") where \"V+complement\" is not a legitimate compound. In cases of \"V + \u5f97/\u4e0d + complement\", we also represent the \"lemma\" which is equivalent to \"V + complement\". The result is a word tree, where the root node contains the lemma of the compound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directional and Resultative Compounding", "sec_num": "2.3" }, { "text": "Both merging and splitting result in word fragments, which often creates a dilemma as to whether to keep those strings as single units or not. We will look at them one by one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging and Splitting", "sec_num": "2.4" }, { "text": "This morphological process, also known as \"telescopic compounding\" [Huang et al. 1997] , can be considered a sub-case of abbreviation, but unlike other kinds of abbreviation, it has a fixed pattern and a predictable semantic interpretation. It applies to cases where two adjacent and semantically related words have some characters in common. The common characters may be at the beginning or end of the words. Here are some examples.", "cite_spans": [ { "start": 67, "end": 86, "text": "[Huang et al. 1997]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Merging", "sec_num": "2.4.1" }, { "text": "\u56fd\u5185+\u56fd\u5916 => \u56fd\u5185\u5916 guo-nei-wai: country-inside-outside \"domestic + foreign\" => \"domestic and foreign\" Common endings (AC+BC => ABC)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common beginnings (AB+AC => ABC)", "sec_num": null }, { "text": "\u8fdb\u53e3+\u51fa\u53e3 => \u8fdb\u51fa\u53e3 jin-chu-kou: enter-exit-port \"import + export\" => \"import and export\" Ending = Beginning (AB+BC => ABC) \u4e0a\u6d77\u5e02+\u5e02\u957f => \u4e0a\u6d77\u5e02\u957f shanghai-shi-zhang: Shanghai-city-head \"Shanghai City + city mayor\" => \"mayor of Shanghai\" All existing standards agree that we have a single word in the AB+AC and AC+BC cases 11 and two words in the AB+BC case. The problem in the first two cases is that, unless we store ABC in the dictionary as a whole, we will not be able to assign good semantic interpretations to them. 
However, not all words of this kind can be stored in the dictionary, since merging is a productive morphological process. To interpret a newly merged word, such as \u5b58\u8d37\u6b3e (cundai-kuan: deposit-borrow-fund \"deposits and loans\"), which is unlikely to be in the dictionary, we seem to need a level of representation where ABC shows up in its underlying form, i.e. AB AC or AC BC. \u5b58\u8d37\u6b3e should then be represented as \u5b58\u6b3e \u8d37\u6b3e, not as the surface segmentation, but as the \"lemmas\" of \u5b58\u8d37\u6b3e. This is what we do in our system, where every merged word contains a tree in which the lemmas are conjoined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common beginnings (AB+AC => ABC)", "sec_num": null }, { "text": "11 Unless the sequence is interrupted by a punctuation mark, as in \u56fd\u5185\u3001\u5916 and \u8fdb\u3001\u51fa\u53e3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common beginnings (AB+AC => ABC)", "sec_num": null }, { "text": "Splitting is an active morphological process where a multiple-character word with an internal verb-object structure is split into two non-consecutive parts by the insertion of an aspect marker, a measure word or other functional elements. Here are some examples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Splitting", "sec_num": "2.4.2" }, { "text": "Insertion of an aspect marker", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Splitting", "sec_num": "2.4.2" }, { "text": "\u6d17\u4e86\u6fa1 xi-le-zao: wash-LE-bath \"took a bath\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u6fa1", "sec_num": null }, { "text": "Insertion of a measure word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u6fa1", "sec_num": null }, { "text": "\u6d17\u4e2a\u6fa1 xi-ge-zao: wash-one-bath \"take a bath\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e2a\u6fa1", "sec_num": null }, { "text": "Insertion of both an aspect marker and a measure word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e2a\u6fa1", "sec_num": null }, { "text": "\u6d17\u4e86\u4e2a\u6fa1 xi-le-ge-zao: wash-LE-one-bath \"took a bath\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "Insertion of even more words \u6d17\u4e86\u4e2a\u8212\u8212\u670d\u670d\u7684\u6fa1 xi-le-ge-shushufufu-de-zao: wash-LE-one-comfortable-DE-bath \"took a comfortable bath\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "Most segmentation standards require such expressions to be segmented into multiple words, such as \u6d17 / \u4e86 / \u6fa1. This can result in segments that are not independent words, as we see in the case of \u6fa1 which is a bound morpheme. One may argue that in such cases the bound morpheme is acting as a free morpheme. But it would still be desirable to have a representation which indicates that \u6d17 and \u6fa1 actually form a single word and \u6d17\u4e86\u6fa1 has more or less the same meaning as \u6d17\u6fa1+\u4e86. In other words, the lemma of \u6d17\u4e86\u6fa1 should be \u6d17\u6fa1. 
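", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "Schematically (a sketch of ours with hypothetical names, not the actual rules, which also check the verb-object structure of the non-split form), the lemma of a simple split word can be recovered by dropping the intervening functional characters and verifying the remainder against the dictionary:
INTERVENERS = {'\u4e86', '\u4e2a'}  # LE and the measure word GE, for illustration

def split_word_lemma(span, dictionary):
    kept = ''.join(ch for ch in span if ch not in INTERVENERS)
    if kept != span and kept in dictionary:
        return kept                  # e.g. xi-le-zao -> xi-zao
    return None", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "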
Such a representation can be difficult in the case of \u6d17\u4e86\u4e2a\u8212\u8212\u670d\u670d\u7684\u6fa1, but even there \u6d17 and \u6fa1 still form a single unit in some sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "The lemma representation of a split word is obviously useful in the realm of information retrieval since it makes it possible to establish links between the split and non-split forms of the same verb. As in the verb-complement case (2.3), it may also be beneficial to Chinese input methods that use an N-gram language model to select the correct character sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "Most existing systems perform poorly when the input contains split words. While the non-split forms of those words (such as \u6d17\u6fa1) are usually in the N-gram model, the split forms are not. If future systems employ word segmentation where the split form is recognized as a single unit with its lemma represented, we will be able to relate \u6d17 and \u6fa1 in \u6d17\u4e86\u6fa1 as long as we have the bigram \"\u6d17\u6fa1\" in the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "A special case of splitting is found in expressions like \u8df3\u8d77\u821e\u6765 (tiao-qi-wu-lai \"start dancing\") where two words (\u8df3\u821e and \u8d77\u6765 in this case) cross each other. Here again we need a level of representation to encode the fact that \u8df3\u8d77\u821e\u6765 actually means \u8df3\u821e+\u8d77\u6765.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "Our system regards a split word as a single unit with a single lemma and a subtree if there are no more than two intervening characters. Syntactic analysis treats the unit as a single leaf and has the option of exposing the subtree as part of the parse tree after parsing is done. For cases like \u6d17\u4e86\u4e2a\u8212\u8212\u670d\u670d\u7684\u6fa1, we parse them as separate words and, if \u6fa1 is found to be the object of \u6d17 in the parse, we will concatenate the lemmas of the verb and the object (i.e. \u6d17+\u6fa1), look up \u6d17\u6fa1 in the dictionary, and make it the lemma of the subtree if it exists as a dictionary entry. This can also be done in the case of \u6d17\u4e86\u6fa1 but we choose to make it a single unit at the lexical level just to reduce the complexity of syntactic analysis. Once its subtree (which also has the verb-object structure in it) is merged into the main parse, we will have a unified representation for \u6d17\u4e86\u6fa1 and \u6d17\u4e86\u4e2a\u8212\u8212\u670d\u670d\u7684\u6fa1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u6d17\u4e86\u4e2a\u6fa1", "sec_num": null }, { "text": "This is an area with the greatest amount of variation among segmentation standards. This is also an area where linguistic theory has very little to say on the justification of a given standard. The differences are mostly computationally motivated and the main concern here is the granularity of segmentation. Different segmentation standards prefer different levels of granularity, but the differences are fairly systematic and can be easily specified in segmentation guidelines. 
Listed below are the most common types of named entities and factoids whose segmentation may vary across different standards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named entities and factoids", "sec_num": "2.5" }, { "text": "A personal name is usually composed of a first name and a last name. The BU standard segments a Chinese name into these two parts and treats a foreign name as a single unit if the first name and last name are connected by \"\u2022\", as in \u8bfa\u7f57\u6566\u2022\u897f\u54c8\u52aa\u514b (nuoluodun-xihanuke \"Norodom Sihanouk\"). Other standards treat both Chinese and foreign names as single words. In our system, a personal name is a single word with an internal structure which indicates not only the family name and the given name but the components of the given name as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personal names", "sec_num": "2.5.1" }, { "text": "There are many levels of granularity here. For instance, \"\u6c5f\u82cf\u7701\u76d0\u57ce\u5730\u533a\" (jiangsu-sheng-yancheng-diqu: Jiangsu-province-Yancheng-prefecture, \"Yancheng Prefecture, Jiangsu Province\") can be segmented as \"\u6c5f\u82cf\u7701\u76d0\u57ce\u5730\u533a\", \"\u6c5f\u82cf\u7701 / \u76d0\u57ce\u5730\u533a\", \"\u6c5f\u82cf\u7701 / \u76d0\u57ce / \u5730\u533a\" or \"\u6c5f\u82cf / \u7701 / \u76d0\u57ce / \u5730\u533a\". Likewise, \"\u4e16\u754c\u8d38\u6613\u7ec4\u7ec7\" (shijie-maoyi-zuzhi: world-trade-organization \"World Trade Organization\") can be segmented as \"\u4e16\u754c\u8d38\u6613\u7ec4\u7ec7\" or \"\u4e16\u754c / \u8d38\u6613 / \u7ec4\u7ec7\". Existing standards usually break those names up as long as it does not result in single-character segments. So place names with single-character place-type suffixes (such as \u6c5f\u82cf\u7701) tend to be kept as one word while place names with multiple-character place-type suffixes (such as \u76d0\u57ce\u5730\u533a) will be separate words. The BU standard has additional annotation to represent the internal structure of place names. \"\u4e16\u754c\u8d38\u6613\u7ec4\u7ec7\", for example, is tagged as [\u4e16\u754c/n \u8d38\u6613/n \u7ec4\u7ec7/n]nt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Place names and organization names", "sec_num": "2.5.2" }, { "text": "Each level of granularity has its pros and cons. On the one hand, \"\u4e16\u754c\u8d38\u6613\u7ec4\u7ec7\" has a better chance of being aligned with \"WTO\" in the automatic acquisition of translation knowledge if it is segmented as one word. On the other hand, \"\u6c5f\u82cf\u7701\" can be more easily related to \"\u6c5f\u82cf\" in information retrieval or automatic summarization if it is segmented into two words. All of this points to the need for a hierarchical structure for all the place names and organization names that contain multiple words. This is what has been done in our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Place names and organization names", "sec_num": "2.5.2" }, { "text": "Word trees are also needed for numbers and other factoids. 
The reasons are obvious and therefore we will simply list some common cases where internal structures exist and different kinds of segmentation are possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factoids", "sec_num": "2.5.3" }, { "text": "\u2022 Numbers \u56db\u767e\u4e94\u5341\u516d si-bai-wu-shi-liu: four-hundred-five-ten-six \"four hundred and fifty-six\" \u56db\u767e\u4e94\u5341\u516d; \u56db\u767e / \u4e94\u5341\u516d; \u56db\u767e / \u4e94\u5341 / \u516d; \u56db / \u767e / \u4e94 / \u5341 / \u516d \u4e09\u5206\u4e4b\u4e00 san-fen-zhi-yi: three-divide-ZHI-one \"one third\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factoids", "sec_num": "2.5.3" }, { "text": "\u4e09\u5206\u4e4b\u4e00; \u4e09 / \u5206\u4e4b / \u4e00; \u4e09 / \u5206 / \u4e4b / \u4e00", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factoids", "sec_num": "2.5.3" }, { "text": "\u4e09\u5341\u591a san-shi-duo: three-ten-more \"thirty or so\" \u4e09\u5341\u591a; \u4e09\u5341 / \u591a; \u4e09 / \u5341 / \u591a; \u6570\u5343 shu-qian: several-thousand \"several thousand\" \u6570\u5343; \u6570 / \u5343", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factoids", "sec_num": "2.5.3" }, { "text": "\u4e00\u4e5d\u4e5d\u4e03\u5e74\u4e09\u6708\u4e94\u65e5 yijiujiuqi-nian-san-yue-wu-ri: 1997-year-3-month-5-date \"March 5, 1997\" \u4e00\u4e5d\u4e5d\u4e03\u5e74\u4e09\u6708\u4e94\u65e5; \u4e00\u4e5d\u4e5d\u4e03\u5e74 / \u4e09\u6708 / \u4e94\u65e5; \u4e00\u4e5d\u4e5d\u4e03 / \u5e74 / \u4e09 / \u6708 / \u4e94 / \u65e5;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Dates", "sec_num": null }, { "text": "\u2022 Time \u5341\u70b9\u96f6\u4e94\u5206 shi-dian-ling-wu-fen: ten-clock-zero-five-minute \"five minutes past ten\" \u5341\u70b9\u96f6\u4e94\u5206; \u5341\u70b9 / \u96f6 / \u4e94\u5206; \u5341 / \u70b9 / \u96f6 / \u4e94 / \u5206;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Dates", "sec_num": null }, { "text": "\u2022 Money \u516d\u5757\u4e5d\u6bdb\u4e09 liu-kuai-jiu-mao-san: six-dollar-nine-dime-three \"Six dollars and ninety-three cents\" \u516d\u5757\u4e5d\u6bdb\u4e09; \u516d\u5757 / \u4e5d\u6bdb / \u4e09; \u516d / \u5757 / \u4e5d / \u6bdb / \u4e09", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Dates", "sec_num": null }, { "text": "\u4e09\u6bd4\u4e00 san-bi-yi: three-match-one \"three to one\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Scores", "sec_num": null }, { "text": "\u4e09\u6bd4\u4e00; \u4e09 / \u6bd4 / \u4e00 \u2022 Range", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Scores", "sec_num": null }, { "text": "\u4e09\u81f3\u4e94\u5929 san-zhi-wu-tian: three-to-five-day \"three to five days\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Scores", "sec_num": null }, { "text": "\u4e09\u81f3\u4e94 / \u5929; \u4e09 / \u81f3 / \u4e94 / \u5929; \u4e09 / \u81f3 / \u4e94\u5929", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Scores", "sec_num": null }, { "text": "These are just simple cases. The structure can be much more complicated when one kind of named entity is embedded in another. 
However, no matter how complicated they are, clear guidelines can be set up to make them segmented consistently as long as their internal structures are available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Scores", "sec_num": null }, { "text": "In this section, we give a detailed description of how our system has been designed to address the problems and requirements discussed in the previous section. We will see how the wordinternal structures are built, how the system can be customized to produce different outputs, and what the initial evaluation results are. Suggestions will also be made as to how the design principle here can be applied to the development of annotated corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Customizable System", "sec_num": "3." }, { "text": "There are two types of words in our system: static words and dynamic words. Generally speaking, static words are those words that are stored in the dictionary while dynamic words are constructed at run time. All the MDWs belong in the category of dynamic words. These words are not supposed to be stored as headwords in our lexicon. Instead, they are to be built dynamically during sentence analysis through the application of a set of word-formation rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Words", "sec_num": "3.1" }, { "text": "There are about 50 word-formation rules in our system, covering all the cases listed in Section 2 and more 12 . They are augmented phrase structure rules that have the form of A(conditions)+B(conditions) => C{actions} and each rule has a unique name that describes the particular morphological process involved. The rules are executed like a small grammar in a morphological parser before sentence-level parsing begins. They interact with each other, with some rules feeding into others, but they do not interact with the grammar rules used in sentence analysis. 13 The derivational history from the rule application then forms a tree that represents the internal structure of a given word. Figure 1 is the word tree for a fictional organization name, where the labels of non-terminal nodes represent the rules that are applied in constructing the tree. ", "cite_spans": [ { "start": 563, "end": 565, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 691, "end": 699, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Dynamic Words", "sec_num": "3.1" }, { "text": "Trees of this kind are built for all types of MDWs, so that all of them can be treated as single words if necessary. These \"maximal word trees\" or \"maximal words\" are submitted to the sentence parser as single lexical units, which significantly reduces parsing complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "In all cases of merging, splitting and reduplication, the feature structure of the parent node also has an attribute that holds the lemma of the word, as we have already mentioned in Section 2. The value of the lemma is computed by piecing together the relevant characters to hypothesize a word and then checking this word against the dictionary. 
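", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "A rough Python sketch of this lemma-hypothesis step (ours; the helper names are hypothetical and 'dictionary' stands for the set of headwords):
def hypothesize_lemma(pattern, s, dictionary):
    if pattern == 'AABB':            # qing-qing-chu-chu -> qing-chu
        candidates = [s[0] + s[2]]
    elif pattern == 'ABAB':          # taolun-taolun -> taolun
        candidates = [s[:2]]
    elif pattern == 'AXA':           # shi-le-shi -> shi
        candidates = [s[0]]
    elif pattern == 'MERGE_AC_BC':   # jin-chu-kou -> jin-kou + chu-kou
        candidates = [s[0] + s[2], s[1] + s[2]]
    else:
        candidates = []
    # Only hypotheses verified in the dictionary become lemma values.
    return [c for c in candidates if c in dictionary]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "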
In the case of AABB reduplication, for instance, the hypothesized word will be AB, such as \u6e05\u695a in \u6e05\u6e05\u695a\u695a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Since \u6e05\u695a is a word in the dictionary, it becomes the value of the lemma attribute of \u6e05\u6e05\u695a\u695a. Similarly, the lemma of \u6d17\u4e86\u6fa1 is \u6d17\u6fa1. 14 In the case of AC+BC => ABC merging, both AC and BC will be hypothesized and put into the lemma attribute of ABC if verified in the dictionary. For example, the lemma of \u8fdb\u51fa\u53e3 is \u8fdb\u53e3+\u51fa\u53e3. These operations all take place in the \"actions\" part on the right-hand side of the rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "An interesting question that arises naturally at this point is what words should be listed in the dictionary. According to our design, none of the MDWs should go into the dictionary. This way the word trees we get will have the maximal word at the top node, the minimal words at the leaves, and the intermediate words at the other nodes. We can thus accommodate the widest range of segmentation variations. In practice, however, there are some complications that need to be dealt with.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "First of all, none of the existing dictionaries has been built strictly in line with this \"minimal word\" principle. They do have the minimal words, but they usually also contain words that are supposed to be dynamic in our system. It is not hard to imagine that a dictionary may contain words like \u610f\u5927\u5229\u5f0f (yidali-shi: Italy-style \"Italian-style\"), \u642c\u8fdb (banjin: move-enter \"move into\"), and \u4e2d\u5c0f\u5b66 (zhong-xiao-xue: middle-small-school \"middle school and elementary school\"). Since our original dictionary was acquired rather than created in house, we do have this problem. We do not add any MDW to our dictionary, but we have to find a way to deal with those words that are already in the lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "The easy way out is to leave the existing dictionary alone, with the assumption that words like \u610f\u5927\u5229\u5f0f, \u8d70\u8fdb, and \u8fdb\u51fa\u53e3 are lexicalized in the dictionary because they have been lexicalized in a Chinese speaker's mind. We can also assume that they are all high-frequency words or words with strong mutual information between their components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Therefore they should stay unsegmented for probabilistic reasons. Yet another assumption is that the dictionary has listed all the exceptional MDWs that should never be segmented. If any of these assumptions turns out to be true, we should respect the dictionary entries, regarding every word in the dictionary as a minimal word, and build word trees only for words that are not in the dictionary. These assumptions do not always hold, of course. We do find many dictionary words that can be further segmented. The solution we adopted is to keep those MDWs in the dictionary while assigning internal structures to them at run time. For all the lexicalized words that need internal structures, we mark them with two simple attributes: Type and Segs. 
15 The value of Type is the name of the rule that would have been used to construct the word dynamically had this word not been lexicalized. Segs marks the potential internal word boundaries in the word. For \u8bed\u8a00\u5b66 (yuyan-xue, language-study, \"linguistics\"), for example, we will have Type = \"NounSfx\" and Segs = \"\u8bed\u8a00_\u5b66\". With these two pieces of information, we are able to reconstruct the internal word tree at run time. In terms of structure, therefore, a lexicalized \u8bed\u8a00\u5b66 will be identical to a dynamically constructed \u8bed\u8a00\u5b66. This enables us to handle all MDWs in a unified way in later stages of processing, regardless of whether they are from the lexicon or from the rules.", "cite_spans": [ { "start": 749, "end": 751, "text": "15", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Once every MDW is assigned a word tree representing its internal structure, how to segment those words becomes merely a display problem, since different segmentations of the same word can now be obtained by taking different cuts of the word tree. Borrowing a term from the graphics world, we can say that we just have to decide on the degree of \"resolution\" in displaying the internal structure, or the granularity of output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "To control the resolution, we let every non-terminal node in the tree be associated with a multi-resolution parameter. Since every non-terminal node corresponds to a word-formation rule with which the node was built, the parameter is in effect associated with a given rule or a particular type of morphological process. In the current system, those parameters are binary-valued: 0 if the daughters of a node are to be displayed as a single word and 1 if they are to be displayed as separate words. To illustrate this, we go back to the MDW in Figure 1: \u8d75\u5143\u4efb\u8bed\u8a00\u5b66\u57fa\u91d1\u4f1a. We find four different types of node labels in its word tree (OrgName, NounSfx, FullName and GivenName), which are the names of the rules used to construct this MDW. Each of them has a multi-resolution parameter: P(OrgName), P(NounSfx), P(FullName) and P(GivenName).
Different settings of those parameters then result in different granularities of segmentation:", "cite_spans": [], "ref_spans": [ { "start": 542, "end": 550, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u2022 P(OrgName) = 0: \u8d75\u5143\u4efb\u8bed\u8a00\u5b66\u57fa\u91d1\u4f1a \u2022 P(OrgName) = 1; P(NounSfx) = 0; P(FullName) = 0:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u8d75\u5143\u4efb / \u8bed\u8a00\u5b66 / \u57fa\u91d1\u4f1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u2022 P(OrgName) = 1; P(NounSfx) = 1; P(FullName) = 0:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u8d75\u5143\u4efb / \u8bed\u8a00 / \u5b66 / \u57fa\u91d1 / \u4f1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u2022 P(OrgName) = 1; P(NounSfx) = 0; P(FullName) = 1; P(GivenName) = 0:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u8d75 / \u5143\u4efb / \u8bed\u8a00\u5b66 / \u57fa\u91d1\u4f1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u2022 P(OrgName) = 1; P(NounSfx) = 0; P(FullName) = 1; P(GivenName) = 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u8d75 / \u5143 / \u4efb / \u8bed\u8a00\u5b66 / \u57fa\u91d1\u4f1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u2022 P(OrgName) = 1; P(NounSfx) = 1; P(FullName) = 1; P(GivenName) = 0:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u8d75 / \u5143\u4efb / \u8bed\u8a00 / \u5b66 / \u57fa\u91d1 / \u4f1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u2022 P(OrgName) = 1; P(NounSfx) = 1; P(FullName) = 1; P(GivenName) = 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "\u8d75 / \u5143 / \u4efb / \u8bed\u8a00 / \u5b66 / \u57fa\u91d1 / \u4f1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "We notice that the values of these parameters are not independent in a given structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "When the parameter of a node is set to 0, the parameter values of all the nodes dominated by that node must be 0 as well. It is impossible to keep an MDW as a single word while separating some of its sub-words at the same time. The value of a parameter can be 1 only if the parameter of its parent node is set to 1. Therefore, although we have about 50 rules and consequently about 50 parameters, there do not exist 2^50 different ways of segmenting sentences even theoretically. But we do provide enough options to adapt the segmentation to any reasonable standard.
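Operationally, a cut of this kind is just a recursive walk over the word tree. The following Python sketch is our own illustration (the Node class and the segment function are hypothetical names, not the system's code); it reproduces the fourth setting above.

```python
# Sketch of cutting a word tree under a set of resolution parameters.
class Node:
    def __init__(self, rule=None, children=None, text=""):
        self.rule = rule                # word-formation rule name (non-terminals)
        self.children = children or []  # daughter nodes
        self.text = text                # surface string (leaves)

    def surface(self):
        return self.text if not self.children else "".join(c.surface() for c in self.children)

def segment(node, p):
    """If P(rule) is 0, display the whole subtree as one word; else recurse."""
    if not node.children or p.get(node.rule, 0) == 0:
        return [node.surface()]
    return [w for child in node.children for w in segment(child, p)]

# The word tree of Figure 1.
tree = Node("OrgName", [
    Node("FullName", [Node(text="赵"),
                      Node("GivenName", [Node(text="元"), Node(text="任")])]),
    Node("NounSfx", [Node(text="语言"), Node(text="学")]),
    Node("NounSfx", [Node(text="基金"), Node(text="会")]),
])

print(segment(tree, {"OrgName": 1, "NounSfx": 0, "FullName": 1, "GivenName": 0}))
# ['赵', '元任', '语言学', '基金会']
```

Note that the dependency among parameter values is enforced implicitly by the recursion: once a node's parameter is 0, the parameters of the nodes it dominates are never consulted.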
A user of our system can set those parameters according to any specification to produce the desired segmentation without making any modification in the system itself. The system is thus easily customizable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "Our current system also provides a parameter whose value determines whether word length is to be taken into consideration. As we have seen in Section 2.3, words formed through directional and resultative compounding are sensitive to word length when it comes to segmentation. These MDWs are more likely to be treated as single words if they have fewer than three characters. The additional parameter covers this case. When it is set to 1, all MDWs built through directional and resultative compounding will be segmented into separate words if they contain more than two characters, regardless of the values of other parameters. Suppose the name of the directional compounding rule is \"DirCmpd\". When the length parameter is set to 0, \u8d70\u8fdb and \u8d70\u8fdb\u6765 will both be kept as single words if P(DirCmpd) is set to 0. They will be segmented into two words if P(DirCmpd) is set to 1. When the length parameter is set to 1, however, \u8d70\u8fdb will be kept as one word but \u8d70\u8fdb\u6765 will be cut into two words even if P(DirCmpd) is set to 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "We also added a parameter whose value determines whether the lemma or the surface string of an MDW is to be displayed. When this parameter is set to 1, the lemma will be displayed and \u8df3\u8d77\u821e\u6765 will be displayed as \u8df3\u821e \u8d77\u6765. This is of course more like stemming than word segmentation, but it is a functionality that some applications may require. In fact, this might be one of the steps we have to take to go from the \"truthful\" level of segmentation to the \"graceful\" level [Huang et al. 1997].", "cite_spans": [ { "start": 469, "end": 488, "text": "[Huang et al. 1997]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-resolution parameters", "sec_num": "3.2" }, { "text": "To find out the degree of customization that can be achieved by the parameterization described above, we evaluated our system against two annotated corpora that were made publicly available for SIGHAN's First International Chinese Word Segmentation Bakeoff: the training data of the Penn Chinese Treebank and the Beijing University Institute of Computational Linguistics Corpus. These two annotated corpora follow very different guidelines, and it should be interesting to see how well our system can adapt to them. We measured our performance with the scoring tool written by Richard Sproat for the First International Chinese Word Segmentation Bakeoff. This scoring tool measures word recall, word precision, the F-measure, the OOV rate, and the OOV recall rate, among other things. Given a reference (the gold standard) and a hypothesis (the segmentation hypothesized by the word segmenter), word recall is the percentage of words in the reference that are also in the hypothesis, and word precision is the percentage of words in the hypothesis that are also in the reference. The F-measure is the harmonic mean of precision and recall.
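As a worked example of these definitions, the sketch below counts a hypothesis word as correct only when both of its boundaries match the reference, which is the usual interval-based way of computing such scores. This is our own illustration, not the scoring tool itself.

```python
# Word precision, recall and F-measure from two segmentations of the
# same sentence (illustrative only).
def spans(words):
    """Map a segmentation to the set of (start, end) character intervals."""
    out, pos = set(), 0
    for w in words:
        out.add((pos, pos + len(w)))
        pos += len(w)
    return out

def score(reference, hypothesis):
    ref, hyp = spans(reference), spans(hypothesis)
    correct = len(ref & hyp)                 # words with matching boundaries
    recall = correct / len(ref)
    precision = correct / len(hyp)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

ref = ["赵元任", "语言学", "基金会"]
hyp = ["赵", "元任", "语言学", "基金会"]
print(score(ref, hyp))   # precision 0.5, recall ~0.667, F ~0.571
```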
The OOV rate is the percentage of words in the reference that are not found in the dictionary, and the OOV recall rate is the percentage of OOV words that are found in the hypothesis. The OOV scores are of interest in this paper because many of the OOV words are", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "MDWs according to our dictionary and the OOV recall rate tells us how many OOV words are covered by the word-formation rules. The wordlist used in running the scoring tool consists of all the 89,845 entries in our dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "In the evaluation, we first segmented the text using our default setting where every parameter was set to 0. This gave us the maximal word in each case. We then did a quick resetting of the parameters following the relevant guidelines. Results of both the default segmentation and the adjusted segmentation were evaluated against the CHTB and BU gold standards. The differences between the default setting scores and the scores after parameter value adjustment thus reflect the amount of customization that has been achieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "We see that the scores improved dramatically across the board in both the CHTB and BU data after the parameter values were adjusted to the relevant standards. In particular, there is a high correlation between the rise in the OOV recall rate and the F-measure, which indicates that the improvements indeed came from the area of MDWs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "We also tried the setting where every parameter was set to 1, which resulted in the display of minimal words. This is the result we would get if we depended only on our dictionary and no MDW rules were applied. The scores dropped sharply in both the CHTB and BU cases. Of particular interest is the drop in the OOV recall rates. If all the OOV words were constructed by MDW rules, the OOV recall rate would be 0 when we display the minimal words, which are all in the dictionary. However, there are other processes in our system that assemble dictionary words into bigger units and these units are invariably displayed as single words. For example, \"1978\" always appears as a single word in spite of the fact that it is assembled from \"1\", \"9\", \"7\" and \"8\" at run time. Another example is English words in Chinese texts, such as \"IBM\",", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "which is not in our dictionary.
MDWs thus account for 85.8% of the OOV recall rate in CHTB and 73.1% of the OOV recall rate in BU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "The evaluation results show clearly that (1) the variation among different standards does come largely from the area of MDWs and (2) our system can adapt to different standards successfully by parameterizing the display of MDWs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "So far we have focused on the customization of a single segmentation system to produce different outputs. We can also envision an approach where segmenters for different standards are built by training them on texts that have been segmented according to those standards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Customizable resources", "sec_num": "3.4" }, { "text": "This leads to the question of whether we can develop language resources that can be customized to serve different purposes. The annotated corpora that are currently being developed in the Chinese NLP community mostly follow a single standard and they are usually not designed for the training of segmenters that do not follow the same standard. However, we cannot afford to build a different tagged corpus for each different standard. It will be highly desirable, therefore, to develop resources that are customizable. The requirement for a segmented text, then, is that it should be convertible into segmentations of varying granularity. To achieve this goal, we have to tag our texts in such a way that (1) the internal structures of words (at least the MDWs) are represented and (2) word boundaries of different types can be selectively kept or removed with ease.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Customizable resources", "sec_num": "3.4" }, { "text": "Certain word-internal structures are already preserved in some annotated corpora. In CHTB, for example, verbs and their directional/resultative complements are grouped into single units with internal word boundaries. \u8d70\u8fdb\u53bb is thus tagged as \"(\u8d70 \u8fdb\u53bb)\" and \u8d70\u4e0d\u8fdb\u53bb as \"(\u8d70 \u4e0d \u8fdb\u53bb)\". The bracketing of named entities in the BU corpora is another step in this direction. The ROCLING standard has set even higher goals. It classifies segmentation into three increasingly demanding levels: faithful (\u4fe1 xin), truthful (\u8fbe da) and graceful (\u96c5 ya) [Huang et al. 1997]. 16 The segmentation units at the faithful level basically correspond to the minimal words in our system. Those at the truthful level are usually MDWs. Segmentation units at the graceful level are not as well defined, but some of them correspond to the maximal words in our system, such as company names. Units at these levels are to be tagged with different SGML tags, one for the faithful level, one for the truthful level, and one for the graceful level. \"\u8d75\u5143\u4efb\u8bed\u8a00\u5b66\u57fa\u91d1\u4f1a\" will probably be tagged with the following units in this scheme, assuming \u8d75, \u5143 and \u4efb are in the dictionary but \u8d75\u5143\u4efb and \u5143\u4efb are not: \u8d75 \u5143 \u4efb \u8bed\u8a00 \u5b66 \u57fa\u91d1 \u4f1a", "cite_spans": [ { "start": 530, "end": 549, "text": "[Huang et al. 
1997]", "ref_id": "BIBREF7" }, { "start": 552, "end": 554, "text": "16", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Customizable resources", "sec_num": "3.4" }, { "text": "This tagging scheme makes the tagged data customizable, since all the potential word boundaries are preserved. But it does not distinguish between different types of MDWs and therefore the choices for customization are more limited. To preserve the type information of MDWs, we will need the following representation: \u8d75 < Char >\u5143 < Char >\u4efb < /GivenName > \u8bed\u8a00 \u5b66 < NounSfx > < Noun >\u57fa\u91d1 < Suffix >\u4f1a This representation is equivalent to the word tree in Figure 1 . It is somewhat clumsy, however, and may not be optimal when it comes to large-scale tagging. A simpler representation might be:", "cite_spans": [], "ref_spans": [ { "start": 652, "end": 660, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Customizable resources", "sec_num": "3.4" }, { "text": "where each number corresponds to a label, namely 1 = OrgName, 2 = NounSfx, 3 = Fullname, and 4 = GivenName. Since each label represents the morphological rule that assembles the pieces into a single unit, we replace each word-internal boundary with the relevant number that corresponds to the rule that puts the pieces together. We can then obtain different segmentations by specifying the types of boundaries to be kept or removed. During customization, the boundaries to be kept will be replaced by spaces and the ones to be removed will disappear. In the above example, if we want to treat personal names and words derived from suffixation as single words while keeping components of an organization name apart, we can remove <2>, <3> and <4> and turn the other numbers into spaces. The result will be \"\u8d75\u5143\u4efb \u8bed\u8a00\u5b66 \u57fa\u91d1\u4f1a\". We will get \"\u8d75 \u5143\u4efb \u8bed\u8a00 \u5b66 \u57fa\u91d1 \u4f1a\" if the number to be removed is just 4. It should be noted that, just like the case of parameter setting in our system, not all the number combinations are possible in the replacement/removal. For example, we cannot remove <1> and replace all the other numbers with spaces, since we cannot keep the whole organization name as a single piece if we break up its components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u8d75<3>\u5143<4>\u4efb<1>\u8bed\u8a00<2>\u5b66<1>\u57fa\u91d1<2>\u4f1a", "sec_num": null }, { "text": "Therefore, there need to be a partial order of those numbers where the removal of a given number implies the removal of some other numbers. The original motivation of this representation was to avoid the need to process the same text N times to get N different segmentations. We were able to process the corpus just once and use the same output for multiple purposes. It seems that this can be an option in the future development of Chinese language resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u8d75<3>\u5143<4>\u4efb<1>\u8bed\u8a00<2>\u5b66<1>\u57fa\u91d1<2>\u4f1a", "sec_num": null }, { "text": "In principle, all the information represented in the word trees of our system can be represented in a tagged corpus. In practice, however, textual representation of certain information (e.g. the lemma attribute) can be cumbersome and it can be labor-intensive for the annotators. Besides, the tagging is not easy to change once it is done. 
The main advantage of a customizable system over a customizable corpus is that the former can adapt to new specifications of representation very quickly, with large-scale systematic changes made within a very short time. This is especially so in cases of \"bracketing paradoxes\" where incompatible representations might have to be generated for different purposes. Of course, the output of an automatic system may be inferior in accuracy to a hand-tagged corpus, but we can maintain a set of surface sentences which are known to have the correct output from the system. Every time the \"spec\" changes, we can modify the system and process those sentences again to produce the updated output instead of modifying the whole tagged corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u8d75<3>\u5143<4>\u4efb<1>\u8bed\u8a00<2>\u5b66<1>\u57fa\u91d1<2>\u4f1a", "sec_num": null }, { "text": "In our current implementation of the multi-resolution parameters, the parameter values are not probabilistic in nature. They are either 0 or 1, and therefore the system is not able to make the finer distinctions that we sometimes need when we try to determine wordhood on the basis of statistical information. As we have seen in Section 2, the segmentation of certain MDWs can depend on the frequency of those MDWs and the mutual information between their components. To make our customization more fine-tuned, we need to take such probabilistic information into account. One way to do it is to gather statistical information for every MDW and normalize it into a value between 0 and 1. This value can then be combined with the parameter values that we set by hand to produce a probability that represents the likelihood of an MDW being broken into individual words. We can then set a threshold to determine the \"resolution\" of the segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future refinement", "sec_num": "3.5" }, { "text": "The standards for Chinese word segmentation can vary according to different definitions of words and the different requirements of NLP applications. It is therefore important that the segmentation systems we develop or the tagged corpora we construct be capable of being customized to meet different needs. In this paper, we have concentrated on the segmentation of morphologically derived words (MDWs). We have demonstrated that a segmentation system can be customized to produce different outputs for different standards if the word-internal structures of MDWs are preserved in a tree structure and different types of nodes in the tree are associated with different resolution parameters. Different settings of those parameters then result in segmentations of different granularities. Evaluation shows that the effect of customization is significant and MDWs are indeed the main area where customization is most needed. A similar approach can also be used in the development of linguistic resources where a single annotated corpus can be customized to provide training and testing data for different applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4." }, { "text": "For a comprehensive review of this problem, see Packard [2000]. 4 This system is developed at Microsoft Research in the general framework of Jensen et al. [1993] and Heidorn [2000].
Details of the Chinese system can be found in Wu et al. [2000, 1998].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The weights in the word lattice are considered in the selection of the best parse. 6 The meaning of AA is not \"A and A\". The verb or adjective is duplicated here to represent certain grammatical aspects, such as short duration or attempted action.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Some of these rules assemble unknown words that are not discussed in Section 2. 13 We do have the option to run these rules together with the grammar rules, but that has been found to affect the system negatively both in efficiency and accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In addition to the lemma, we also have attributes that record the information associated with the inserted part. In \u6d17\u4e86\u6fa1, we store the tense/aspect information contributed by \u4e86, so that \u6d17\u4e86\u6fa1 as a single verb will be equivalent to \u6d17\u4e86\u6fa1 as a verb phrase in terms of semantic content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The addition of such information to the dictionary was done semi-automatically. We automatically extracted from the dictionary candidates for a given type of MDWs and then had a human evaluator remove the invalid ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "\u2022 AA \u770b\u770b kan-kan: look-look \"take a look\" \u7ea2\u7ea2 hong-hong: red-red \"very red / kind of red\" \u6162\u6162 man-man: slow-slow \"slowly\" \u5e74\u5e74 nian-nian: year-year \"every year\" \u2022 ABAB \u7814\u7a76\u7814\u7a76 yanjiu-yanjiu: research-research \"do some research\" \u8212\u670d\u8212\u670d shufu-shufu: comfortable-comfortable \"have a comfortable time\" \u2022 AABB \u65b9\u65b9\u9762\u9762 fang-fang-mian-mian \"every aspect\" \u6e05\u6e05\u695a\u695a qing-qing-chu-chu \"very clear\" \u75db\u75db\u5feb\u5feb tong-tong-kuai-kuai \"thoroughly\" \u5e74\u5e74\u6708\u6708 nian-nian-yue-yue: year-year-month-month \"year after year, month after month\" \u2022 AXA \u8bd5\u4e00\u8bd5 shi-yi-shi: try-one-try \"give it a try\" \u8bd5\u4e86\u8bd5 shi-le-shi: try-LE-try \"gave it a try\" \u8bd5\u4e86\u4e00\u8bd5 shi-le-yi-shi: try-LE-one-try \"gave it a try\" \u2022 AXAY \u8dd1\u6765\u8dd1\u53bb pao-lai-pao-qu: run-come-run-go \"run around\" \u9001\u533b\u9001\u836f song-yi-song-yao: send-doctor-send-medicine \"deliver medical aid\" \u4e00\u7816\u4e00\u74e6 yi-zhuan-yi-wa: one-brick-one-tile \"every brick / brick by brick\" \u6240\u8a00\u6240\u884c suo-yan-suo-xing: SUO-speak-SUO-do \"every word and deed\" \u2022 XAYA \u4e1c\u770b\u897f\u770b dong-kan-xi-kan: east-look-west-look \"look here and there\" \u5de6\u6311\u53f3\u6311 zuo-tiao-you-tiao: left-pick-right-pick \"pick and choose\" \u2022 AA\u770b \u8bd5\u8bd5\u770b shi-shi-kan: try-try-look \"give it a try\" \u2022 AAB \u5145\u5145\u7535 chong-chong-dian \"charge the battery a bit\" \u6e9c\u6e9c\u5149 liu-liu-guang \"very smooth\" \u2022 ABB \u4eae\u5802\u5802 liang-tang-tang \"very bright\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I. 
Examples of reduplication", "sec_num": null }, { "text": "\u2022 Prefix + Noun => Noun \u5fae\u7535\u5b50 wei-dianzi \"micro-electronics\" \u2022 Prefix + Noun => Adj \u9632\u75c5\u6bd2 (\u8f6f\u4ef6) fang-bingdu \"anti-virus\" \u2022 Prefix + Verb => Adj \u53ef\u518d\u751f (\u80fd\u6e90) ke-zaisheng \"re-usable\" \u2022 Noun + Suffix => Noun \u90ae\u9012\u5458 youdi-yuan \"mail-man\" \u2022 Verb + Suffix => Adj \u6e10\u8fdb\u5f0f jianjin-shi \"gradual-mode\" \u2022 Adj + Suffix => Noun \u79ef\u6781\u6027 jiji-xing \"proactive-ness\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefixation", "sec_num": "1." } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Syntactic, morphological and phonological words in Chinese", "authors": [ { "first": "J", "middle": [ "X" ], "last": "Dai", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "103--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dai, J. X.-L., \"Syntactic, morphological and phonological words in Chinese\", in Packard (1997), pp. 103-134.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Chinese Morphology and its Interface with the Syntax", "authors": [ { "first": "J", "middle": [ "X" ], "last": "Dai", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dai, J. X.-L., Chinese Morphology and its Interface with the Syntax, Ph.D. thesis, The Ohio State University, Columbus, OH, 1992.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On the Definition of Word", "authors": [ { "first": "Di", "middle": [], "last": "Sciullo", "suffix": "" }, { "first": "A", "middle": [ "M" ], "last": "", "suffix": "" }, { "first": "E", "middle": [], "last": "Williams", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Sciullo, A. M. and E. Williams, On the Definition of Word. MIT Press, Cambridge, MA, 1987.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Wordhood in Chinese", "authors": [ { "first": "S", "middle": [], "last": "Duanmu", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "135--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duanmu, S., \"Wordhood in Chinese\", in Packard (1997), pp. 135-196.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Contemporary Chinese language word-segmentation for information processing", "authors": [], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "GB/T 13715-92. Contemporary Chinese language word-segmentation for information processing. Technical report, Beijing, 1993.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Intelligent writing assistance", "authors": [ { "first": "G", "middle": [ "E" ], "last": "Heidorn", "suffix": "" } ], "year": 2000, "venue": "A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text", "volume": "", "issue": "", "pages": "181--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heidorn, G. E., \"Intelligent writing assistance\", in A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text, Dale R., Moisl H., and Somers H., eds., Marcel Dekker, New York, 2000, pp. 
181-207.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Segmentation standard for Chinese natural language processing", "authors": [ { "first": "C", "middle": [], "last": "Huang", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "F", "middle": [], "last": "Chen", "suffix": "" }, { "first": "L", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1997, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "2", "issue": "2", "pages": "47--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C., K. Chen, F. Chen and L. Chang, Segmentation standard for Chinese natural language processing. International Journal of Computational Linguistics and Chinese Language Processing, 2(2), 1997, pp. 47-62.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Natural Language Processing: the PLNLP Approach", "authors": [ { "first": "K", "middle": [], "last": "Jensen", "suffix": "" }, { "first": "G", "middle": [], "last": "Heidorn", "suffix": "" }, { "first": "S", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jensen, K., G. Heidorn and S. Richardson. Natural Language Processing: the PLNLP Approach\". Kluwer Academic Publishers, Boston, 1993.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "New Approaches to Chinese Word Formation: Morphology, phonology and the lexicon in modern and ancient Chinese. Trends in Linguistics Studies and Monographs 105", "authors": [ { "first": "J", "middle": [], "last": "Packard", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Packard, J. (ed.), New Approaches to Chinese Word Formation: Morphology, phonology and the lexicon in modern and ancient Chinese. Trends in Linguistics Studies and Monographs 105. Mouton de Gruyter, Berlin and New York, 1997.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Morphology of Chinese: A Linguistic and Cognitive Approach", "authors": [ { "first": "J", "middle": [], "last": "Packard", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Packard, J., The Morphology of Chinese: A Linguistic and Cognitive Approach. Cambridge University Press, Cambridge, 2000.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "ROCLING Segmentation Principle for Chinese Language Processing", "authors": [], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ROCLING Segmentation Principle for Chinese Language Processing, 1997, http://godel.iis.sinica.edu.tw/ROCLING/juhuashu1.htm", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Syntax of Words", "authors": [ { "first": "E", "middle": [], "last": "Selkirk", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Selkirk, E., The Syntax of Words. 
The MIT Press, Cambridge, MA, 1982.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Corpus-Based Methods in Chinese Morphology", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2002, "venue": "Tutorial at the 19th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R., Corpus-Based Methods in Chinese Morphology. Tutorial at the 19th International Conference on Computational Linguistics, 2002.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A Computational Theory of Writing Systems", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R., A Computational Theory of Writing Systems. Cambridge University Press, Cambridge, 2000.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A stochastic finite-state word-segmentation algorithm for Chinese", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "C", "middle": [], "last": "Shih", "suffix": "" }, { "first": "W", "middle": [], "last": "Gale", "suffix": "" }, { "first": "N", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "3", "pages": "377--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R., C. Shih, W. Gale and N. Chang, \"A stochastic finite-state word-segmentation algorithm for Chinese\". Computational Linguistics, 22(3), 1996, pp. 377-404.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A statistical method for finding word boundaries in Chinese text", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "C", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1990, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "4", "issue": "", "pages": "336--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R. and C. Shih, \"A statistical method for finding word boundaries in Chinese text\", Computer Processing of Chinese and Oriental Languages, Vol. 4, 1990, pp. 336-351.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Statistically-Enhanced New Word Identification in a Rule-based Chinese System", "authors": [ { "first": "A", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Second ACL Chinese Processing Workshop", "volume": "", "issue": "", "pages": "46--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, A. and Z. Jiang, \"Statistically-Enhanced New Word Identification in a Rule-based Chinese System\". In Proceedings of the Second ACL Chinese Processing Workshop, HKUST, Hong Kong, 2000, pp. 
46-51.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Word Segmentation in Sentence Analysis", "authors": [ { "first": "A", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 1998 International Conference on Chinese Information Processing", "volume": "", "issue": "", "pages": "169--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu A. and Z. Jiang, \"Word Segmentation in Sentence Analysis\". In Proceedings of the 1998 International Conference on Chinese Information Processing, Beijing, China, 1998, pp. 169-180.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The Segmentation Guidelines for the Penn Chinese Treebank (3.0)", "authors": [ { "first": "F", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xia, F., The Segmentation Guidelines for the Penn Chinese Treebank (3.0). Technical report, University of Pennsylvania, 2000, http://www.cis.upenn.edu/~chinese/.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Guidelines for the Annotation of Contemporary Chinese Texts: word segmentation and POS-tagging", "authors": [ { "first": "S", "middle": [], "last": "Yu", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, S., Guidelines for the Annotation of Contemporary Chinese Texts: word segmentation and POS-tagging, Institute of Computational Linguistics, Beijing University, Beijing, 1999", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "\u8d75\u5143\u4efb\u8bed\u8a00\u5b66\u57fa\u91d1\u4f1azhao-yuanren-yuyan-xue-jijin-hui:Zhao-Yuanren-language-science-fund-committee \"Yuan-Ren Chao Linguistics Foundation\"", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "16 (a) Faithful (\u4fe1 xin): All segmentation units listed in the reference lexicon should be successfully segmented; (b)Truthful (\u8fbe da): In addition to (a), all segmentation units derivable by morphological rules should be successfully segmented; Graceful (\u96c5 ya): Segmentation units are ideal linguistic words for fully automated language understanding.", "uris": null, "num": null } } } }