{ "paper_id": "O04-2003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:00:22.865876Z" }, "title": "Auto-Generation of NVEF Knowledge in Chinese", "authors": [ { "first": "Jia-Lin", "middle": [], "last": "Tsai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Nankang, Taipei", "country": "Taiwan, R.O.C" } }, "email": "tsaijl@iis.sinica.edu.tw" }, { "first": "Gladys", "middle": [], "last": "Hsieh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Nankang, Taipei", "country": "Taiwan, R.O.C" } }, "email": "gladys@iis.sinica.edu.tw" }, { "first": "Wen-Lian", "middle": [], "last": "Hsu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Nankang, Taipei", "country": "Taiwan, R.O.C" } }, "email": "hsu@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Noun-verb event frame (NVEF) knowledge in conjunction with an NVEF word-pair identifier [Tsai et al. 2002] comprises a system that can be used to support natural language processing (NLP) and natural language understanding (NLU). In [Tsai et al. 2002a], we demonstrated that NVEF knowledge can be used effectively to solve the Chinese word-sense disambiguation (WSD) problem with 93.7% accuracy for nouns and verbs. In [Tsai et al. 2002b], we showed that NVEF knowledge can be applied to the Chinese syllable-to-word (STW) conversion problem to achieve 99.66% accuracy for the NVEF related portions of Chinese sentences. In [Tsai et al. 2002a], we defined a collection of NVEF knowledge as an NVEF word-pair (a meaningful NV word-pair) and its corresponding NVEF sense-pairs. No methods exist that can fully and automatically find collections of NVEF knowledge from Chinese sentences. 
We propose a method here for automatically acquiring large-scale NVEF knowledge without human intervention in order to identify a large, varied range of NVEF-sentences (sentences containing at least one NVEF word-pair). The auto-generation of NVEF knowledge (AUTO-NVEF) includes four major processes: (1) segmentation checking; (2) Initial Part-of-Speech (IPOS) sequence generation; (3) NV knowledge generation; and (4) NVEF knowledge auto-confirmation. Our experimental results show that AUTO-NVEF achieved 98.52% accuracy for news and 96.41% for specific text types, which included research reports, classical literature and modern literature. AUTO-NVEF automatically discovered over 400,000 NVEF word-pairs from the 2001 United Daily News (2001 UDN) corpus. According to our estimation, the acquired NVEF knowledge from 2001 UDN helped to identify 54% of the NVEF-sentences in the Academia Sinica Balanced Corpus (ASBC), and 60% in the 2001 UDN corpus.", "pdf_parse": { "paper_id": "O04-2003", "_pdf_hash": "", "abstract": [ { "text": "Noun-verb event frame (NVEF) knowledge in conjunction with an NVEF word-pair identifier [Tsai et al. 2002] comprises a system that can be used to support natural language processing (NLP) and natural language understanding (NLU). In [Tsai et al. 2002a], we demonstrated that NVEF knowledge can be used effectively to solve the Chinese word-sense disambiguation (WSD) problem with 93.7% accuracy for nouns and verbs. In [Tsai et al. 2002b], we showed that NVEF knowledge can be applied to the Chinese syllable-to-word (STW) conversion problem to achieve 99.66% accuracy for the NVEF related portions of Chinese sentences. In [Tsai et al. 2002a], we defined a collection of NVEF knowledge as an NVEF word-pair (a meaningful NV word-pair) and its corresponding NVEF sense-pairs. No methods exist that can fully and automatically find collections of NVEF knowledge from Chinese sentences. 
We propose a method here for automatically acquiring large-scale NVEF knowledge without human intervention in order to identify a large, varied range of NVEF-sentences (sentences containing at least one NVEF word-pair). The auto-generation of NVEF knowledge (AUTO-NVEF) includes four major processes: (1) segmentation checking; (2) Initial Part-of-Speech (IPOS) sequence generation; (3) NV knowledge generation; and (4) NVEF knowledge auto-confirmation. Our experimental results show that AUTO-NVEF achieved 98.52% accuracy for news and 96.41% for specific text types, which included research reports, classical literature and modern literature. AUTO-NVEF automatically discovered over 400,000 NVEF word-pairs from the 2001 United Daily News (2001 UDN) corpus. According to our estimation, the acquired NVEF knowledge from 2001 UDN helped to identify 54% of the NVEF-sentences in the Academia Sinica Balanced Corpus (ASBC), and 60% in the 2001 UDN corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The most challenging problem in natural language processing (NLP) is programming computers to understand natural languages. For humans, efficient syllable-to-word (STW) conversion and word sense disambiguation (WSD) occur naturally when a sentence is understood. When a natural language understanding (NLU) system is designed, methods that enable consistent STW and WSD are critical but difficult to attain. For most languages, a sentence is a grammatical organization of words expressing a complete thought [Chu 1982; Fromkin et al. 1998 ]. Since a word is usually encoded with multiple senses, to understand language, efficient word sense disambiguation (WSD) is critical for an NLU system. As found in a study on cognitive science [Choueka et al. 1983] , people often disambiguate word sense using only a few other words in a given context (frequently only one additional word). 
That is, the relationship between a word and each of the others in the sentence can be used effectively to resolve ambiguity. From [Small et al. 1988; Krovetz et al. 1992; Resnik et al. 2000] , most ambiguities occur with nouns and verbs. Object-event (i.e., noun-verb) distinction is the most prominent ontological distinction for humans [Carey 1992 ]. Tsai et al. [2002a] showed that knowledge of meaningful noun-verb (NV) word-pairs and their corresponding sense-pairs in conjunction with an NVEF word-pair identifier can be used to achieve a WSD accuracy rate of 93.7% for NV-sentences (sentences that contain at least one noun and one verb).", "cite_spans": [ { "start": 506, "end": 516, "text": "[Chu 1982;", "ref_id": "BIBREF5" }, { "start": 517, "end": 536, "text": "Fromkin et al. 1998", "ref_id": "BIBREF13" }, { "start": 732, "end": 753, "text": "[Choueka et al. 1983]", "ref_id": null }, { "start": 1011, "end": 1030, "text": "[Small et al. 1988;", "ref_id": "BIBREF27" }, { "start": 1031, "end": 1051, "text": "Krovetz et al. 1992;", "ref_id": "BIBREF18" }, { "start": 1052, "end": 1071, "text": "Resnik et al. 2000]", "ref_id": "BIBREF24" }, { "start": 1219, "end": 1230, "text": "[Carey 1992", "ref_id": "BIBREF1" }, { "start": 1234, "end": 1253, "text": "Tsai et al. [2002a]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "According to [\u80e1\u88d5\u6a39 et al. 1995; \u9673\u514b\u5065 et al. 1996; Fromkin et al. 1998; \u6731\u66c9\u4e9e 2001; \u9673\u660c\uf92d 2002; \uf9c7\u9806 2003 ], the most important content word relationship in sentences is the noun-verb construction. For most languages, subject-predicate (SP) and verb-object (VO) are the two most common NV constructions (or meaningful NV word-pairs). In Chinese, SP and VO constructions can be found in three language units: compounds, phrases and sentences [Li et al. 1997] . 
Modifier-head (MH) and verb-complement (VC) are two other meaningful NV word-pairs which are only found in phrases and compounds. Consider the meaningful NV word-pair \u6c7d\uf902-\u9032\u53e3(car, import). It is an MH construction in the Chinese compound \u9032\u53e3\u6c7d \uf902(import car) and a VO construction in the Chinese phrase \u9032\u53e3\u8a31\u591a\u6c7d\uf902(import many cars).", "cite_spans": [ { "start": 13, "end": 30, "text": "[\u80e1\u88d5\u6a39 et al. 1995;", "ref_id": null }, { "start": 31, "end": 47, "text": "\u9673\u514b\u5065 et al. 1996;", "ref_id": null }, { "start": 48, "end": 68, "text": "Fromkin et al. 1998;", "ref_id": "BIBREF13" }, { "start": 69, "end": 78, "text": "\u6731\u66c9\u4e9e 2001;", "ref_id": null }, { "start": 79, "end": 88, "text": "\u9673\u660c\uf92d 2002;", "ref_id": null }, { "start": 89, "end": 96, "text": "\uf9c7\u9806 2003", "ref_id": null }, { "start": 432, "end": 448, "text": "[Li et al. 1997]", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In [Tsai et al. 2002a] , we called a meaningful NV word-pair a noun-verb event frame (NVEF) word-pair. Combining the NV word-pair \u6c7d\uf902-\u9032\u53e3 and its sense-pair Car-Import creates a collection of NVEF knowledge. Since a complete event frame usually contains a predicate and its arguments, an NVEF word-pair can be a full or a partial event frame construction.", "cite_spans": [ { "start": 3, "end": 22, "text": "[Tsai et al. 2002a]", "ref_id": null }, { "start": 85, "end": 91, "text": "(NVEF)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In Chinese, syllable-to-word entry is the most popular input method. Since the average number of characters sharing the same phoneme is 17, efficient STW conversion has become an indispensable tool. In [Tsai et al. 
2002b] , we showed that NVEF knowledge can be used to achieve an STW accuracy rate of 99.66% for converting NVEF related words in Chinese. We proposed a method for the semi-automatic generation of NVEF knowledge in [Tsai et al. 2002a] . This method uses the NV frequencies in sentence groups to generate NVEF candidates to be filtered by human editors. This process becomes labor-intensive when a large amount of NVEF knowledge is created. To our knowledge, no methods exist that can be used to fully auto-extract a large amount of NVEF knowledge from Chinese text. In the literature, most methods for auto-extracting Verb-Noun collections (i.e., meaningful NV word-pairs) focus on English [Benson et al. 1986; Church et al. 1990; Smadja 1993; Smadja et al. 1996; Lin 1998; Huang et al. 2000; Jian 2003 ]. However, work on VN collections focuses on extracting meaningful NV word-pairs, not NVEF knowledge. In this paper, we propose a new method that automatically generates NVEF knowledge from running texts and constructs a large amount of NVEF knowledge.", "cite_spans": [ { "start": 202, "end": 221, "text": "[Tsai et al. 2002b]", "ref_id": null }, { "start": 430, "end": 449, "text": "[Tsai et al. 2002a]", "ref_id": null }, { "start": 906, "end": 926, "text": "[Benson et al. 1986;", "ref_id": "BIBREF0" }, { "start": 927, "end": 946, "text": "Church et al. 1990;", "ref_id": "BIBREF7" }, { "start": 947, "end": 959, "text": "Smadja 1993;", "ref_id": null }, { "start": 960, "end": 979, "text": "Smadja et al. 1996;", "ref_id": null }, { "start": 980, "end": 989, "text": "Lin 1998;", "ref_id": "BIBREF21" }, { "start": 990, "end": 1008, "text": "Huang et al. 2000;", "ref_id": "BIBREF14" }, { "start": 1009, "end": 1018, "text": "Jian 2003", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper is arranged as follows. In section 2, we describe in detail the auto-generation of NVEF knowledge. 
Experiment results and analyses are given in section 3. Conclusions are drawn and future research ideas discussed in section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "2. Development of a Method for NVEF Knowledge Auto-Generation. For our auto-generation of NVEF knowledge (AUTO-NVEF) system, we use HowNet 1.0 [Dong 1999 ] as a system dictionary. This system dictionary provides 58,541 Chinese words and their corresponding parts-of-speech (POS) and word senses (called DEF in HowNet). Contained in this dictionary are 33,264 nouns and 16,723 verbs, as well as 16,469 senses comprised of 10,011 noun-senses and 4,462 verb-senses.", "cite_spans": [ { "start": 136, "end": 146, "text": "[Dong 1999", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Since 1999, HowNet has become one of the most widely used Chinese-English bilingual knowledge-base dictionaries for Chinese NLP research. Machine translation (MT) is a typical application of HowNet. Interesting issues related to (1) the overall picture of HowNet, (2) comparisons between HowNet [Dong 1999] , WordNet [Miller 1990; Fellbaum 1998 ], Suggested Upper Merged Ontology (SUMO) [Niles et al. 2001; Subrata et al. 2002; Chung et al. 2003 ] and VerbNet [Dang et al. 2000; Kipper et al. 2000] and (3) typical applications of HowNet can be found in the 2nd tutorial of IJCNLP-04 [Dong 2004 ].", "cite_spans": [ { "start": 290, "end": 301, "text": "[Dong 1999]", "ref_id": null }, { "start": 312, "end": 325, "text": "[Miller 1990;", "ref_id": "BIBREF22" }, { "start": 326, "end": 339, "text": "Fellbaum 1998", "ref_id": "BIBREF12" }, { "start": 382, "end": 401, "text": "[Niles et al. 2001;", "ref_id": "BIBREF23" }, { "start": 402, "end": 422, "text": "Subrata et al. 2002;", "ref_id": "BIBREF28" }, { "start": 423, "end": 440, "text": "Chung et al. 
2003", "ref_id": "BIBREF6" }, { "start": 455, "end": 473, "text": "[Dang et al. 2000;", "ref_id": "BIBREF9" }, { "start": 474, "end": 493, "text": "Kipper et al. 2000]", "ref_id": "BIBREF17" }, { "start": 579, "end": 589, "text": "[Dong 2004", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The sense of a word is defined as its definition of concept (DEF) in HowNet. Table 1 lists three different senses of the Chinese word \uf902(Che[surname]/car/turn). In HowNet, the DEF of a word consists of its main feature and all secondary features. For example, in the DEF \"character|\u6587\u5b57,surname|\u59d3,human|\u4eba,ProperName|\u5c08\" of the word \uf902(Che[surname]), the first item \"character|\u6587\u5b57\" is the main feature, and the remaining three items, surname|\u59d3, human|\u4eba, and ProperName|\u5c08, are its secondary features. The main feature in HowNet inherits features from the hypernym-hyponym hierarchy. There are approximately 1,500 such features in HowNet. Each one is called a sememe, which refers to the smallest semantic unit that cannot be reduced. As previously mentioned, a meaningful NV word-pair is a noun-verb event-frame word-pair (NVEF word-pair), such as \uf902 -\ufa08\u99db(Che[surname]/car/turn, move). In a sentence, an NVEF word-pair can take an SP or a VO construction; in a phrase/compound, an NVEF word-pair can take an SP, a VO, an MH or a VC construction. From Table 1 , the only meaningful NV sense-pair for \uf902 -\ufa08\u99db(car, move) is LandVehicle|\uf902 -VehicleGo|\u99db. 
Here, combining the NVEF sense-pair LandVehicle|\uf902 -VehicleGo|\u99db and the NVEF word-pair \uf902 -\ufa08\u99db creates a collection of NVEF knowledge.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1041, "end": 1048, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Definition of NVEF Knowledge", "sec_num": "2.1" }, { "text": "To effectively represent NVEF knowledge, we have proposed an NVEF knowledge representation tree (NVEF KR-tree) that can be used to store, edit and browse acquired NVEF knowledge. The details of the NVEF KR-tree given below are taken from [Tsai et al. 2002a ].", "cite_spans": [ { "start": 238, "end": 256, "text": "[Tsai et al. 2002a", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Representation Tree for NVEF Knowledge", "sec_num": "2.2" }, { "text": "The two types of nodes in the KR-tree are function nodes and concept nodes. Concept nodes refer to words and senses (DEF) of NVEF knowledge. Function nodes define the relationships between the parent and children concept nodes. According to each main feature of noun senses in HowNet, we can classify noun senses into fifteen subclasses. These subclasses are \u5fae\u751f\u7269(bacteria), \u52d5\u7269\uf9d0(animal), \u4eba\u7269\uf9d0(human), \u690d\u7269\uf9d0(plant), \u4eba\u5de5\u7269(artifact), \u5929 \u7136\u7269(natural), \u4e8b\u4ef6\uf9d0(event), \u7cbe\u795e\uf9d0(mental), \u73fe\u8c61\uf9d0(phenomena), \u7269\u5f62\uf9d0(shape), \u5730\u9ede\uf9d0 (place), \u4f4d\u7f6e\uf9d0(location), \u6642\u9593\uf9d0(time), \u62bd\u8c61\uf9d0(abstract) and \uf969\uf97e\uf9d0(quantity). 
Appendix A provides a table of the fifteen main noun features in each noun-sense subclass.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Representation Tree for NVEF Knowledge", "sec_num": "2.2" }, { "text": "As shown in Figure 1 , the three function nodes that can be used to construct a collection of NVEF knowledge (LandVehicle|車-VehicleGo|駛) are as follows:", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Knowledge Representation Tree for NVEF Knowledge", "sec_num": "2.2" }, { "text": "(2) Word Instance (實例): The contents of word instance children consist of words belonging to the sense subclass of their parent node. These words are self-learned through the sentences located under the Test-Sentence nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Representation Tree for NVEF Knowledge", "sec_num": "2.2" }, { "text": "(3) Test Sentence (測試題): The contents of test sentence children consist of the selected test NV-sentence that provides a language context for its corresponding NVEF knowledge. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Representation Tree for NVEF Knowledge", "sec_num": "2.2" }, { "text": "AUTO-NVEF automatically discovers meaningful NVEF sense/word-pairs (NVEF knowledge) in Chinese sentences. Figure 2 shows the AUTO-NVEF flow chart. There are four major processes in AUTO-NVEF. These processes are shown in Figure 2 , and Table 2 shows a step-by-step example. A detailed description of each process is provided in the following. 
", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 114, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 221, "end": 229, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 236, "end": 243, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Auto-Generation of NVEF Knowledge", "sec_num": "2.3" }, { "text": "In this stage, a Chinese sentence is segmented according to two strategies: forward (left-to-right) longest word first and backward (right-to-left) longest word first. From [Chen et al. 1986] , the \"longest syllabic word first strategy\" is effective for Chinese word segmentation. If both forward and backward segmentations are equal (forward=backward) and the word number of the segmentation is greater than one, then this segmentation result will be sent to process 2; otherwise, a NULL segmentation will be sent. Table 3 shows a comparison of the word-segmentation accuracy for forward, backward and forward=backward strategies using the Chinese Knowledge Information Processing (CKIP) lexicon [CKIP 1995] . The word segmentation accuracy is the ratio of the correctly segmented sentences to all the sentences in the Academia Sinica Balanced Corpus (ASBC) [CKIP 1996] . A correctly segmented sentence means the segmented result exactly matches its corresponding segmentation in ASBC. Table 3 shows that the forward=backward technique achieves the best word segmentation accuracy.", "cite_spans": [ { "start": 173, "end": 191, "text": "[Chen et al. 1986]", "ref_id": "BIBREF3" }, { "start": 699, "end": 710, "text": "[CKIP 1995]", "ref_id": null }, { "start": 862, "end": 873, "text": "[CKIP 1996]", "ref_id": null } ], "ref_spans": [ { "start": 517, "end": 524, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 990, "end": 997, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Process 1. 
Segmentation checking:", "sec_num": null }, { "text": "\u5165\u8a31\u591a\u89c0\u773e(There are many audience members entering the locale of the concert). The English words in parentheses are included for explanatory purposes only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "Process Output", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "(1) \u97f3\u6a02\u6703(concert)/\u73fe\u5834(locale)/\u6e67\u5165(enter)/\u8a31\u591a(many)/\u89c0\u773e(audience members) (3) Process 2. Initial POS sequence generation: This process will be triggered if the output of process 1 is not a NULL segmentation. It is comprised of the following steps. 1) For segmentation result w 1 /w 2 /\u2026/w n-1 /w n from process 1, our algorithm computes the POS of w i , where i = 2 to n. Then, it computes the following two sets: a) the following POS/frequency set of w i-1 according to ASBC and b) the HowNet POS set of w i . It then computes the POS intersection of the two sets. Finally, it selects the POS with the highest frequency in the POS intersection as the POS of w i . If there is zero or more than one POS with the highest frequency, the POS of w i will be set to NULL POS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "(2) N 1 N 2 V 3 ADJ 4 N 5 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. 
An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "NV1 = \u73fe\u5834/place|\u5730\u65b9,#fact|\u4e8b\u60c5/N -\u6e67\u5165(yong3 ru4)/GoInto|\u9032\u5165/V NV2 = \u89c0\u773e/human|\u4eba,*look|\u770b,#entertainment|\u85dd,#sport|\u9ad4\u80b2,*recreation|\u5a1b\uf914/N -\u6e67\u5165(yong3 ru4)/GoInto|\u9032\u5165/V", "eq_num": "(" } ], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "2) For the POS of w 1 , it selects the POS with the highest frequency in the POS intersection of the preceding POS/frequency set of w 2 and the HowNet POS set of w 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "3) After combining the determined POSs of w i obtained in first two steps, it then generates the initial POS sequence (IPOS). Take the Chinese segmentation \u751f/\u4e86 as an example. The following POS/frequency set of the Chinese word \u751f(to bear) is {N/103, PREP/42, STRU/36, V/35, ADV/16, CONJ/10, ECHO/9, ADJ/1}(see Table 4 for tags defined in HowNet). The HowNet POS set of the Chinese word \u4e86(a Chinese satisfaction indicator) is {V, STRU}. According to these sets, we have the POS intersection {STRU/36, V/35}. Since the POS with the highest frequency in this intersection is STRU, the POS of \u4e86 will be set to STRU. Similarly, according to the intersection {V/16124, N/1321, ADJ/4} of the preceding POS/frequency set {V/16124, N/1321, PREP/1232, ECHO/121, ADV/58, STRU/26, CONJ/4, ADJ/4} of \u4e86 and the HowNet POS set {V, N, ADJ} of \u751f, the POS of \u751fwill be set to V. 
Table 4 shows a mapping list of CKIP POS tags and HowNet POS tags. Process 3. NV knowledge generation: This process will be triggered if the IPOS output of process 2 does not include any NULL POS. The steps in this process are given as follows.", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 859, "end": 866, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "1) Compute the final POS sequence (FPOS). This step translates an IPOS into an FPOS. For each continuous noun sequence of IPOS, the last noun will be kept, and the other nouns will be dropped. This is because a contiguous noun sequence in Chinese is usually a compound, and its head is the last noun. Take the Chinese sentence \u97f3\u6a02\u6703(N 1 )\u73fe\u5834(N 2 )\u6e67\u5165(V 3 )\u8a31\u591a (ADJ 4 )\u89c0\u773e(N 5 ) and its IPOS N 1 N 2 V 3 ADJ 4 N 5 as an example. Since it has a continuous noun sequence\u97f3\u6a02\u6703(N 1 )\u73fe\u5834(N 2 ), the IPOS will be translated into FPOS", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "N 1 V 2 ADJ 3 N 4 , where N 1 =\u73fe\u5834, V 2 =\u6e67\u5165, ADJ 3 =\u8a31\u591aand N 4 =\u89c0\u773e. 2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "Generate NV word-pairs. According to the FPOS mappings and their corresponding NV word-pairs (see Appendix B), AUTO-NVEF generates NV word-pairs. In this study, we created more than one hundred FPOS mappings and their corresponding NV word-pairs. Consider the above mentioned Process 4. 
NVEF knowledge auto-confirmation: In this stage, AUTO-NVEF automatically confirms whether the generated NV knowledge is or is not NVEF knowledge. The two auto-confirmation procedures are described in the following.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "FPOS N 1 V 2 ADJ 3 N 4 , where N 1 =\u73fe\u5834, V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "(a) NVEF accepting condition (NVEF-AC) checking: Each NVEF accepting condition is constructed using a noun-sense class (such as \u4eba\u7269\u985e[human]) defined in [Tsai et al. 2002a ] and a verb main feature (such as GoInto|\u9032\u5165) defined in HowNet [Dong 1999 ]. In [Tsai et al. 2002b] , we created 4,670 NVEF accepting conditions from manually confirmed NVEF knowledge. In this procedure, if the noun-sense class and the verb main feature of the generated NV knowledge can satisfy at least one NVEF accepting condition, then the generated NV knowledge will be auto-confirmed as NVEF knowledge and will be sent to the NVEF KR-tree. Appendix C lists the ten NVEF accepting conditions used in this study.", "cite_spans": [ { "start": 151, "end": 169, "text": "[Tsai et al. 2002a", "ref_id": null }, { "start": 234, "end": 244, "text": "[Dong 1999", "ref_id": null }, { "start": 251, "end": 270, "text": "[Tsai et al. 2002b]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "(b) NVEF enclosed-word template (NVEF-EW template) checking: If the generated NV knowledge cannot be auto-confirmed as NVEF knowledge in procedure (a), this procedure will be triggered. 
An NVEF-EW template is composed of all the left side words and right side words of an NVEF word-pair in a Chinese sentence. For example, the NVEF-EW template of the NVEF word-pair \u6c7d\u8eca-\u884c\u99db(car, move) in the Chinese sentence \u9019(this)/\u6c7d\u8eca(car)/\u4f3c\u4e4e(seem)/\u884c\u99db(move)/\u9806\u66a2(well) is \u9019 N \u4f3c\u4e4e V \u9806\u66a2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "In this study, all NVEF-EW templates were auto-generated from: 1) the collection of manually confirmed NVEF knowledge in , 2) the on-line collection of NVEF knowledge automatically confirmed by AUTO-NVEF and 3) the manually created NVEF-EW templates. In this procedure, if the NVEF-EW template of a generated NV word-pair matches at least one NVEF-EW template, then the NV knowledge will be auto-confirmed as NVEF knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. An illustration of AUTO-NVEF for the Chinese sentence \u97f3\uf914\u6703\u73fe\u5834\u6e67", "sec_num": null }, { "text": "To evaluate the performance of the proposed approach to the auto-generation of NVEF knowledge, we define the NVEF accuracy and NVEF-identified sentence ratio according to Equations (1) and (2), respectively:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3." }, { "text": "NVEF accuracy = # of meaningful NVEF knowledge / # of total generated NVEF knowledge;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3." }, { "text": "(1) NVEF-identified sentence ratio =# of NVEF-identified sentences / # of total NVEF-sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3." 
}, { "text": "In Equation 1, meaningful NVEF knowledge means that the generated NVEF knowledge has been manually confirmed to be a collection of NVEF knowledge. In Equation 2, if a Chinese sentence can be identified as having at least one NVEF word-pair by means of the generated NVEF knowledge in conjunction with the NVEF word-pair identifier proposed in [Tsai et al. 2002a] , this sentence is called an NVEF-identified sentence. If a Chinese sentence contains at least one NVEF word-pair, it is called an NVEF-sentence. We estimate that about 70% of the Chinese sentences in ASBC are NVEF-sentences.", "cite_spans": [ { "start": 343, "end": 362, "text": "[Tsai et al. 2002a]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3." }, { "text": "A user interface that manually confirms generated NVEF knowledge is shown in Figure 3 . With it, evaluators (native Chinese speakers) can review generated NVEF knowledge and determine whether or not it is meaningful NVEF knowledge. Take the Chinese sentence 高度壓力(High pressure)使(make)有些(some)人(people)食量(eating capacity)減少(decrease) as an example. AUTO-NVEF will generate an NVEF knowledge collection that includes the ", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "User Interface for Manually Confirming NVEF Knowledge", "sec_num": "3.1" }, { "text": "Auto-generated NVEF knowledge can be confirmed as meaningful NVEF knowledge if it satisfies all three of the following principles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principles for Confirming Meaningful NVEF Knowledge", "sec_num": "3.2" }, { "text": "Principle 1. 
The NV word-pair produces correct noun(N) and verb(V) POS tags for the given Chinese sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principles for Confirming Meaningful NVEF Knowledge", "sec_num": "3.2" }, { "text": "Principle 2. The NV sense-pair and the NV word-pair make sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principles for Confirming Meaningful NVEF Knowledge", "sec_num": "3.2" }, { "text": "Principle 3. Most of the inherited NV word-pairs of the NV sense-pair satisfy Principles 1 and 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principles for Confirming Meaningful NVEF Knowledge", "sec_num": "3.2" }, { "text": "For our experiment, we used two corpora. All the NVEF knowledge acquired by AUTO-NVEF from the testing corpora was manually confirmed by evaluators. Tables 5a and 5b show the experiment results. These tables show that our AUTO-NVEF achieved 98.52% NVEF accuracy for news and 96.41% for specific text types. When we applied AUTO-NVEF to the entire 2001 UDN corpus, it auto-generated 173,744 NVEF sense-pairs (8.8M) and 430,707 NVEF word-pairs (14.1M). Within this data, 51% of the NVEF knowledge were generated based on NVEF accepting conditions (human-editing knowledge), and 49% were generated based on NVEF-enclosed word templates (machine-learning knowledge). Tables 5a and 5b show that the average accuracy of NVEF knowledge generated by NVEF-AC and NVEF-EW for news and specific texts reached 98.71% and 97.00%, respectively. These results indicate that our AUTO-NVEF has the ability to simultaneously maintain high precision and extend NVEF-EW knowledge, similar to the snowball effect, and to generate a large amount of NVEF knowledge without human intervention. 
The results also suggest that the best way to overcome the precision-recall tradeoff problem in NLP is to combine linguistic knowledge with statistical constraints, i.e., a hybrid approach [Huang et al. 1996; Tsai et al. 2003].", "cite_spans": [ { "start": 1258, "end": 1277, "text": "[Huang et al. 1996;", "ref_id": "BIBREF15" }, { "start": 1278, "end": 1294, "text": "Tsai et al. 2003", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 663, "end": 679, "text": "Tables 5a and 5b", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Experiment Results", "sec_num": "3.3" }, { "text": "From the noun and verb positions of NVEF word-pairs in Chinese sentences, NVEF knowledge can be classified into four NV-position types: N:V, N-V, V:N and V-N, where : means next to and - means nearby. Table 6a shows examples and the percentages of the four NV-position types of generated NVEF knowledge. The ratios (percentages) of the collections of N:V, N-V, V:N and V-N are 12.41%, 43.83%, 19.61% and 24.15%, respectively. Table 6a shows that an NVEF word-pair, such as \u5de5\u7a0b-\u5b8c\u6210(Construction, Complete), can appear as an N:V, N-V, V:N or V-N in sentences. For our generated NVEF knowledge, the maximum and average numbers of characters between nouns and verbs are 27 and 3, respectively. Based on the numbers of noun and verb characters in NVEF word-pairs, we classify NVEF knowledge into four NV-word-length types: N1V1, N1V2+, N2+V1 and N2+V2+, where N1 and V1 mean single-character nouns and verbs, respectively; N2+ and V2+ mean multi-character nouns and verbs. Table 6b shows examples and the percentages of the four NV-word-length types of manually created NVEF knowledge for 1,000 randomly selected ASBC sentences. From the manually created NVEF knowledge, we estimate that the percentages of the collections of N1V1, N1V2+, N2+V1 and N2+V2+ NVEF word-pairs are 6.4%, 6.8%, 22.2% and 64.6%, respectively. 
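The NV-position typology above can be sketched in a few lines of Python. This is our illustration, not the authors' code; the function name and the placeholder token lists are hypothetical stand-ins for segmented Chinese sentences, and for simplicity only the first occurrence of each word is considered.

```python
# Classify the NV-position type of an NVEF word-pair in a segmented
# sentence, following the scheme of Section 3.3.1:
#   ':' means the noun and verb are adjacent, '-' means they are apart;
#   the letter order reflects which of the two words comes first.
def nv_position_type(words, noun, verb):
    """words: segmented sentence (list of word strings); noun/verb: the pair."""
    n_i, v_i = words.index(noun), words.index(verb)  # first occurrences only
    adjacent = abs(n_i - v_i) == 1
    if n_i < v_i:
        return "N:V" if adjacent else "N-V"
    return "V:N" if adjacent else "V-N"

# Placeholder English tokens standing in for segmented Chinese words:
print(nv_position_type(["pressure", "decrease"], "pressure", "decrease"))  # N:V
print(nv_position_type(["pressure", "make", "some", "decrease"],
                       "pressure", "decrease"))                            # N-V
```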
According to this NVEF knowledge, we estimate that the auto-generated NVEF knowledge (for 2001 UDN) in conjunction with the NVEF word-pair identifier can be used to identify 54% of the NVEF-sentences in ASBC. Table 6c shows the Top 5 single-character verbs in N1V1 and N2+V1 NVEF word-pairs and their percentages. Table 6d shows the Top 5 multi-character verbs in N1V2+ and N2+V2+ NVEF word-pairs and their percentages. From Table 6c, the percentages of N2+\u662f and N2+\u6709 NVEF word-pairs are both greater than those of other single-character verbs. Thus, the N2+\u662f and N2+\u6709 NVEF knowledge was worth considering in our AUTO-NVEF. On the other hand, we found that 3.2% of the NVEF-sentences (or 2.3% of the ASBC sentences) were N1V1-only sentences, where an N1V1-only sentence is a sentence that has only one N1V1-NVEF word-pair. For example, the Chinese sentence \u4ed6(he)\uf96f(say)\u904e\uf9ba(already) is an N1V1-only sentence because it has only one N1V1-NVEF word-pair: \u4ed6-\uf96f(he, say). Since (1) N1V1-NVEF knowledge is not critical for our NVEF-based applications and (2) auto-generating N1V1-NVEF knowledge is very difficult, the auto-generation of N1V1-NVEF knowledge was not considered in our AUTO-NVEF. In fact, according to the system dictionary, the maximum and average word-sense numbers of single-character words were 27 and 2.2, respectively, and those of multi-character words were 14 and 1.1, respectively. 
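The NV-word-length typology can be sketched the same way. Again this is a minimal illustration under our own naming, not the paper's code: N1/V1 denote single-character nouns/verbs and N2+/V2+ multi-character ones, so the type follows directly from the character lengths of the two words.

```python
# Assign the NV-word-length type of Section 3.3.1 from character counts.
def nv_word_length_type(noun, verb):
    n = "N1" if len(noun) == 1 else "N2+"
    v = "V1" if len(verb) == 1 else "V2+"
    return n + v

print(nv_word_length_type("他", "說"))      # N1V1  (he, say)
print(nv_word_length_type("工程", "完成"))  # N2+V2+ (construction, complete)
```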
", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 208, "text": "Table 6a", "ref_id": "TABREF10" }, { "start": 424, "end": 432, "text": "Table 6a", "ref_id": "TABREF10" }, { "start": 981, "end": 989, "text": "Table 6b", "ref_id": "TABREF11" }, { "start": 1536, "end": 1544, "text": "Table 6c", "ref_id": "TABREF12" }, { "start": 1641, "end": 1649, "text": "Table 6d", "ref_id": "TABREF13" }, { "start": 1752, "end": 1760, "text": "Table 6c", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Analysis and Classification of NVEF Knowledge", "sec_num": "3.3.1" }, { "text": "One hundred collections of manually confirmed non-meaningful NVEF (NM-NVEF) knowledge from the experiment results were analyzed. We classified them according to eleven error types, as shown in Table 7 , which lists the NM-NVEF confirmation principles and the percentages for the eleven error types. The first three types comprised 52% of the NM-NVEF cases that did not satisfy NVEF confirmation principles 1, 2 and 3. The fourth type was rare, representing only 1% of the NM-NVEF cases. Type 5, 6 and 7 errors comprised 11% of the NM-NVEF cases and were caused by HowNet lexicon errors, such as the incorrect DEF (word-sense) exist|\u5b58\u5728 for the Chinese word \u76c8\u76c8 (an adjective, normally used to describe someone's beautiful smile). Type 8, 9, 10 and 11 errors are referred to as four NLP errors and comprised 36% of the NM-NVEF cases. Type 8 errors were caused by the different word-senses used in Old and Modern Chinese; Type 9 errors were caused by errors in WSD; Type 10 errors were caused by the unknown word problem; and Type 11 errors were caused by incorrect word segmentation. Table 8 gives examples for each type of NP-NVEF knowledge. From Table 7 , 11% of the NM-NVEF cases could be resolved by correcting the lexicon errors in HowNet [Dong 1999 ]. 
The four types of NLP errors that caused 36% of the NM-NVEF cases could be eliminated by using other techniques such as WSD ([Resnik et al. 2000; Yang et al. 2002]), unknown word identification ([Chang et al. 1997; Lai et al. 2000; Chen et al. 2002; Sun et al. 2002]) or word segmentation ([Sproat et al. 1996; Teahan et al. 2000]). ", "cite_spans": [ { "start": 1241, "end": 1251, "text": "[Dong 1999", "ref_id": null }, { "start": 1381, "end": 1401, "text": "[Resnik et al. 2000;", "ref_id": "BIBREF24" }, { "start": 1402, "end": 1419, "text": "Yang et al. 2002]", "ref_id": "BIBREF37" }, { "start": 1451, "end": 1471, "text": "([Chang et al. 1997;", "ref_id": null }, { "start": 1472, "end": 1488, "text": "Lai et al. 2000;", "ref_id": "BIBREF19" }, { "start": 1489, "end": 1506, "text": "Chen et al. 2002;", "ref_id": "BIBREF4" }, { "start": 1507, "end": 1523, "text": "Sun et al. 2002;", "ref_id": "BIBREF29" }, { "start": 1548, "end": 1569, "text": "([Sproat et al. 1996;", "ref_id": "BIBREF30" }, { "start": 1570, "end": 1589, "text": "Teahan et al. 2000]", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 193, "end": 200, "text": "Table 7", "ref_id": "TABREF14" }, { "start": 1081, "end": 1088, "text": "Table 8", "ref_id": "TABREF15" }, { "start": 1145, "end": 1152, "text": "Table 7", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Error Analysis -Non-Meaningful NVEF Knowledge Generated by AUTO-NVEF", "sec_num": "3.3.2" }, { "text": "In this paper, we have presented an auto-generation system for NVEF knowledge (AUTO-NVEF) that fully and automatically discovers and constructs a large amount of NVEF knowledge for NLP and NLU systems. AUTO-NVEF uses both human-editing knowledge (HowNet conceptual constraints) and machine-learning knowledge (word-context patterns). Experimental results show that AUTO-NVEF achieves 98.52% accuracy for news and 96.41% accuracy for specific text types. The average number of characters between nouns and verbs in NVEF knowledge is 3. 
Since only 2.3% of the sentences in ASBC are N1V1-only sentences, N1V1 NVEF knowledge should not be a critical issue for NVEF-based applications. From our experimental results, neither word segmentation nor POS tagging is a critical issue for our AUTO-NVEF. The critical problems, about 60% of the error cases, were caused by failed word-sense disambiguation (WSD) and HowNet lexicon errors. Therefore, AUTO-NVEF, using conventional maximum-matching word segmentation and bigram-like POS tagging algorithms, was able to achieve more than 98% accuracy for news. By applying AUTO-NVEF to the 2001 UDN corpus, we created 173,744 NVEF sense-pairs (8.8M) and 430,707 NVEF word-pairs (14.1M) in an NVEF-KR tree. Using this collection of NVEF knowledge and an NVEF word-pair identifier, we achieved a WSD accuracy rate of 93.7% and an STW accuracy rate of 99.66% for the NVEF-related portions of Chinese sentences. To sum up the experimental results in [Wu et al. 2003a; Wu et al. 2003b], NVEF knowledge was investigated and shown to be useful for WSD, STW, domain event extraction, domain ontology generation and text categorization.", "cite_spans": [ { "start": 1488, "end": 1505, "text": "[Wu et al. 2003a;", "ref_id": null }, { "start": 1506, "end": 1522, "text": "Wu et al. 2003b]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Research", "sec_num": "4." }, { "text": "According to our estimation, the auto-acquired NVEF knowledge from the 2001 UDN corpus combined with the NVEF word-pair identifier could be used to identify 54% and 60% of the NVEF-sentences in ASBC and in the 2001 UDN corpus, respectively. Since 94.73% (9,345/9,865) of the nouns in the most frequent 60,000 CKIP lexicon are contained in NVEF knowledge constructions, the auto-generated NVEF knowledge constitutes an acceptably large amount of NVEF knowledge for NLP/NLU systems. 
We found that the failure to cover the remaining 51.16% (5,122/10,011) of the noun-senses in HowNet was caused by two problems. One was that words with multiple noun-senses or multiple verb-senses are not easily resolved by WSD (for example, by fully automatic machine-learning techniques); this is especially true for single-character words. In our system dictionary, the maximum and average word-sense numbers of single-character words are 27 and 2.2, respectively. The other problem was corpus sparseness. We will continue expanding our NVEF knowledge through other corpora so that we can identify more than 75% of the NVEF-sentences in ASBC. AUTO-NVEF will be extended to auto-generate other meaningful content-word constructions, in particular, meaningful noun-noun, noun-adjective and verb-adverb word-pairs. In addition, we will investigate the effectiveness of NVEF knowledge in other NLP and NLU applications, such as syllable and speech understanding as well as full and shallow parsing. In [\u8463\u632f\u6771 1998; Jian 2003; Dong 2004], it was shown that the knowledge in bilingual Verb-Noun (VN) grammatical collections, i.e., NVEF word-pairs, is critically important for machine translation (MT). This motivates further work on the auto-generation of bilingual, especially Chinese-English, NVEF knowledge to support MT research. ", "cite_spans": [ { "start": 1449, "end": 1459, "text": "[\u8463\u632f\u6771 1998;", "ref_id": null }, { "start": 1460, "end": 1470, "text": "Jian 2003;", "ref_id": "BIBREF16" }, { "start": 1471, "end": 1481, "text": "Dong 2004]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Research", "sec_num": "4." } ], "back_matter": [ { "text": "We are grateful to our colleagues in the Intelligent Agent Systems Laboratory (IASL): Li-Yeng Chiu, Mark Shia, Gladys Hsieh, Masia Yu, Yi-Fan Chang, Jeng-Woei Su and Win-wei Mai, who helped us create and verify all the NVEF knowledge and tools for this study. 
We would also like to thank Professor Zhen-Dong Dong for providing the HowNet dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Appendix C. Ten Examples of NVEF accepting Conditions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The BBI Combination Dictionary of English: A Guide to Word Combination", "authors": [ { "first": "M", "middle": [], "last": "Benson", "suffix": "" }, { "first": "E", "middle": [], "last": "Benson", "suffix": "" }, { "first": "R", "middle": [], "last": "Ilson", "suffix": "" } ], "year": 1986, "venue": "John Benjamins", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benson, M., E. Benson, and R. Ilson, The BBI Combination Dictionary of English: A Guide to Word Combination, John Benjamins, Amsterdam, Netherlands, 1986.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The origin and evolution of everyday concepts", "authors": [ { "first": "S", "middle": [], "last": "Carey", "suffix": "" } ], "year": 1992, "venue": "Cognitive Models of Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carey, S., \"The origin and evolution of everyday concepts (In R. N. 
Giere, ed.),\" Cognitive Models of Science, Minneapolis: University of Minnesota Press, 1992.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An Unsupervised Iterative Method for Chinese New Lexicon Extraction", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "K", "middle": [ "Y" ], "last": "Su", "suffix": "" }, { "first": "; Y", "middle": [], "last": "", "suffix": "" }, { "first": "S", "middle": [], "last": "Lusignan", "suffix": "" } ], "year": 1983, "venue": "International Journal of Computational Linguistics & Chinese language Processing", "volume": "6", "issue": "1", "pages": "89--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, J. S. and K. Y. Su, \"An Unsupervised Iterative Method for Chinese New Lexicon Extraction,\" International Journal of Computational Linguistics & Chinese language Processing, 1997Choueka, Y. and S. Lusignan, \"A Connectionist Scheme for Modeling Word Sense Disambiguation,\" Cognition and Brain Theory, 6(1), 1983, pp.89-120.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Model for Lexical Analysis and Parsing of Chinese Sentences", "authors": [ { "first": "C", "middle": [ "G" ], "last": "Chen", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "L", "middle": [ "S" ], "last": "Lee", "suffix": "" } ], "year": 1986, "venue": "Proceedings of 1986 International Conference on Chinese Computing", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, C.G., K.J. Chen and L.S. 
Lee, \"A Model for Lexical Analysis and Parsing of Chinese Sentences,\" Proceedings of 1986 International Conference on Chinese Computing, Singapore, 1986, pp.33-40.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unknown Word Extraction for Chinese Documents", "authors": [ { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "W", "middle": [ "Y" ], "last": "Ma", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 19 th COLING 2002", "volume": "", "issue": "", "pages": "169--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, K. J. and W. Y. Ma, \"Unknown Word Extraction for Chinese Documents,\" Proceedings of 19 th COLING 2002, Taipei, 2002, pp.169-175.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Chinese Grammar and English Grammar: a Comparative Study", "authors": [ { "first": "S", "middle": [ "C R" ], "last": "Chu", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chu, S. C. R., Chinese Grammar and English Grammar: a Comparative Study, The Commerical Press, Ltd. The Republic of China, 1982.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "ECONOMY IS A PERSON: A Chinese-English Corpora and Ontological-based Comparison Using the Conceptual Mapping Model", "authors": [ { "first": "S", "middle": [ "F" ], "last": "Chung", "suffix": "" }, { "first": "K", "middle": [], "last": "Ahrens", "suffix": "" }, { "first": "C", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 15th ROCLING Conference for the Association for Computational Linguistics and Chinese Language Processing", "volume": "", "issue": "", "pages": "87--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chung, S. F., Ahrens, K., and Huang C. 
\"ECONOMY IS A PERSON: A Chinese-English Corpora and Ontological-based Comparison Using the Conceptual Mapping Model,\" In Proceedings of the 15th ROCLING Conference for the Association for Computational Linguistics and Chinese Language Processing, National Tsing-Hwa University, Taiwan, 2003, pp.87-110.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Word Association Norms, Mutual Information, and Lexicongraphy", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "1", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K. W. and P. Hanks, \"Word Association Norms, Mutual Information, and Lexicongraphy,\" Computational Linguistics, 16(1), 1990, pp.22-29.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "the content and illustration of Sinica corpus of Academia Sinica", "authors": [], "year": 1995, "venue": "A study of Chinese Word Boundaries and Segmentation Standard for Information processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CKIP(Chinese Knowledge Information processing Group), Technical Report no. 95-02, the content and illustration of Sinica corpus of Academia Sinica. Institute of Information Science, Academia Sinica, 1995. http://godel.iis.sinica.edu.tw/CKIP/r_content.html CKIP(Chinese Knowledge Information processing Group), A study of Chinese Word Boundaries and Segmentation Standard for Information processing (in Chinese). 
Technical Report, Taiwan, Taipei, Academia Sinica, 1996.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Integrating compositional semantics into a verb lexicon", "authors": [ { "first": "H", "middle": [ "T" ], "last": "Dang", "suffix": "" }, { "first": "K", "middle": [], "last": "Kipper", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2000, "venue": "COLING-2000 Eighteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dang, H. T., K. Kipper and M. Palmer, \"Integrating compositional semantics into a verb lexicon,\" COLING-2000 Eighteenth International Conference on Computational Linguistics, Saarbr\u00fccken, Germany, July 31 -August 4, 2000.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Tutorials of HowNet", "authors": [ { "first": "Z", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2004, "venue": "The First International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong, Z., Tutorials of HowNet, The First International Joint Conference on Natural Language Processing (IJCNLP-04), 2004.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C., WordNet: An Electronic Lexical Database, MIT Press, Cambridge, MA, 1998.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An Introduction to Language", "authors": [ { "first": "V", "middle": [], "last": "Fromkin", "suffix": "" }, { "first": "R", "middle": [], "last": "Rodman", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, 
"num": null, "urls": [], "raw_text": "Fromkin, V. and R. Rodman, An Introduction to Language, Sixth Edition, Holt, Rinehart and Winston, 1998.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Character-based Collection for Mandarin Chinese", "authors": [ { "first": "C", "middle": [ "R" ], "last": "Huang", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [ "Y" ], "last": "Yang", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "540--543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C. R., K. J. Chen, Y. Y. Yang, \"Character-based Collection for Mandarin Chinese,\" In ACL 2000, 2000, pp.540-543.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Issues and Topics in Chinese Natural Language Processing", "authors": [ { "first": "C", "middle": [ "R" ], "last": "Huang", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" } ], "year": 1996, "venue": "Journal of Chinese Linguistics, Monograph", "volume": "9", "issue": "", "pages": "1--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C. R., K. J. Chen, \"Issues and Topics in Chinese Natural Language Processing,\" Journal of Chinese Linguistics, Monograph series number 9, 1996, pp.1-22.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Extracting Verb-Noun Collections from Text", "authors": [ { "first": "J", "middle": [ "Y" ], "last": "Jian", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 15th ROCLING Conference for the Association for Computational Linguistics and Chinese Language Processing", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian, J. 
Y., \"Extracting Verb-Noun Collections from Text,\" In Proceedings of the 15th ROCLING Conference for the Association for Computational Linguistics and Chinese Language Processing, National Tsing-Hwa University, Taiwan, 2003, pp.295-302.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Class-Based Construction of a Verb Lexicon", "authors": [ { "first": "K", "middle": [], "last": "Kipper", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Dang", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2000, "venue": "AAAI-2000 Seventeenth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kipper K., H. T. Dang and M. Palmer, \"Class-Based Construction of a Verb Lexicon,\" AAAI-2000 Seventeenth National Conference on Artificial Intelligence, Austin, TX, July 30 -August 3, 2000.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Lexical Ambiguity and Information Retrieval", "authors": [ { "first": "R", "middle": [], "last": "Krovetz", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1992, "venue": "ACM Transactions on Information Systems", "volume": "10", "issue": "2", "pages": "115--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krovetz, R. and W. B. Croft, \"Lexical Ambiguity and Information Retrieval,\" ACM Transactions on Information Systems, 10(2), 1992, pp.115-141.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unknown Word and Phrase Extraction Using a Phrase-Like-Unit-based Likelihood Ratio", "authors": [ { "first": "Y", "middle": [ "S" ], "last": "Lai", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Wu", "suffix": "" } ], "year": 2000, "venue": "International Journal of Computer Processing Oriental Language", "volume": "13", "issue": "1", "pages": "83--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lai, Y. S. 
and Wu, C. H., \"Unknown Word and Phrase Extraction Using a Phrase-Like-Unit-based Likelihood Ratio,\" International Journal of Computer Processing Oriental Language, 13(1), 2000, pp.83-95.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Mandarin Chinese: a Functional Reference Grammar", "authors": [ { "first": "N", "middle": [ "C" ], "last": "Li", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, N. C. and S. A. Thompson, Mandarin Chinese: a Functional Reference Grammar, The Crane Publishing Co., Ltd. Taipei, Taiwan, 1997.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using Collection Statistics in Information Extraction", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proc. of the Seventh Message Understanding Conference (MUC-7)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D., \"Using Collection Statistics in Information Extraction,\" In Proc. 
of the Seventh Message Understanding Conference (MUC-7), 1998.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "WordNet: An On-Line Lexical Database", "authors": [ { "first": "G", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller G., \"WordNet: An On-Line Lexical Database,\" International Journal of Lexicography, 3(4), 1990.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Origins of the Standard Upper Merged Ontology: A Proposal for the IEEE Standard Upper Ontology", "authors": [ { "first": "I", "middle": [], "last": "Niles", "suffix": "" }, { "first": "A", "middle": [], "last": "Pease", "suffix": "" } ], "year": 2001, "venue": "Working Notes of the IJCAI-2001 Workshop on the IEEE Standard Upper Ontology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niles, I., and Pease, A, \"Origins of the Standard Upper Merged Ontology: A Proposal for the IEEE Standard Upper Ontology,\" In Working Notes of the IJCAI-2001 Workshop on the IEEE Standard Upper Ontology, Seattle, Washington, August 6, 2001. On-Line United Daily News, http://udnnews.com/NEWS/", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Distinguishing Systems and Distinguishing Senses: New Evaluation Methods for Word Sense Disambiguation", "authors": [ { "first": "P", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2000, "venue": "Natural Language Engineering", "volume": "5", "issue": "3", "pages": "113--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, P. and D. 
Yarowsky, \"Distinguishing Systems and Distinguishing Senses: New Evaluation Methods for Word Sense Disambiguation,\" Natural Language Engineering, 5(3), 2000, pp.113-133.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Retrieving Collections from Text: Xtract", "authors": [ { "first": "F", "middle": [], "last": "Smadjia", "suffix": "" } ], "year": null, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "143--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smadjia, F., \"Retrieving Collections from Text: Xtract,\" Computational Linguistics, 19(1), pp.143-177", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Translating Collections for Bilingual Lexicons: A Statistical Approach", "authors": [ { "first": "F", "middle": [], "last": "Smadjia", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "Mckeown", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smadjia, F., K. R. McKeown, and V. Hatzivassiloglou, \"Translating Collections for Bilingual Lexicons: A Statistical Approach,\" Computational Linguistics, 22(1) 1996, pp.1-38.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Lexical Ambiguity Resolution", "authors": [ { "first": "S", "middle": [], "last": "Small", "suffix": "" }, { "first": "G", "middle": [], "last": "Cottrell", "suffix": "" }, { "first": "M", "middle": [ "E" ], "last": "Tannenhaus", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Small, S., and G. Cottrell, and M. E. 
Tannenhaus, Lexical Ambiguity Resolution, Morgan Kaufmann, Palo Alto, Calif., 1988.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Ontologies for Agent-Based Information Retrieval and Sequence Mining", "authors": [ { "first": "D", "middle": [], "last": "Subrata", "suffix": "" }, { "first": "K", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "C", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Workshop on Ontologies in Agent Systems (OAS02), held at the 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems", "volume": "", "issue": "", "pages": "15--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Subrata D., Shuster K., and Wu, C., \"Ontologies for Agent-Based Information Retrieval and Sequence Mining,\" In Proceedings of the Workshop on Ontologies in Agent Systems (OAS02), held at the 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems Bologna, Italy, July, 2002, pp.15-19.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Chinese Named Entity Identification Using Class-based Language Model", "authors": [ { "first": "J", "middle": [], "last": "Sun", "suffix": "" }, { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "L", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "C", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2000, "venue": "the Proceedings of 19 th COLING", "volume": "", "issue": "", "pages": "967--973", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, J., J. Gao, L. Zhang, M. Zhou and C. 
Huang, \"Chinese Named Entity Identification Using Class-based Language Model,\" In the Proceedings of 19 th COLING 2002, Taipei, 2000, pp.967-973.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A Stochastic Finite-State Word-Segmentation Algorithm for Chinese", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "C", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "3", "pages": "377--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R. and C. Shih, \"A Stochastic Finite-State Word-Segmentation Algorithm for Chinese,\" Computational Linguistics, 22(3), 1996, pp.377-404.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A compression-based algorithm for chinese word segmentation", "authors": [ { "first": "W", "middle": [ "J" ], "last": "Teahan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wen", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mcnab", "suffix": "" }, { "first": "I", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "", "pages": "375--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teahan, W.J., Wen, Y., McNab, R.J., Witten, I.H., \"A compression-based algorithm for chinese word segmentation,\" Computational Linguistics, 26, 2000, pp.375-393.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Word sense disambiguation and sense-based NV event-frame identifier", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Tsai", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Su", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "7", "issue": "", "pages": "29--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, J. L, W. L. 
Hsu and J. W. Su, \"Word sense disambiguation and sense-based NV event-frame identifier,\" Computational Linguistics and Chinese Language Processing, Vol. 7, No. 1, February 2002, pp.29-46.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Applying NVEF Word-Pair Identifier to the Chinese Syllable-to-Word Conversion Problem", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Tsai", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 19 th COLING", "volume": "", "issue": "", "pages": "1016--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, J. L, W. L. Hsu, \"Applying NVEF Word-Pair Identifier to the Chinese Syllable-to-Word Conversion Problem,\" Proceedings of 19 th COLING 2002, Taipei, 2002, pp.1016-1022.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Chinese Word Auto-Confirmation Agent", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Tsai", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Sung", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 15th ROCLING Conference for the Association for Computational Linguistics and Chinese Language Processing", "volume": "", "issue": "", "pages": "175--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, J. L, C. L. Sung and W. L. 
Hsu, \"Chinese Word Auto-Confirmation Agent,\" In Proceedings of the 15th ROCLING Conference for the Association for Computational Linguistics and Chinese Language Processing, National Tsing-Hwa University, Taiwan, 2003, pp.175-192.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Text Categorization Using Automatically Acquired Domain Ontology", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Wu", "suffix": "" }, { "first": "T", "middle": [ "H" ], "last": "Tsai", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Sixth International Workshop on Information Retrieval with Asian Languages (IRAL-03)", "volume": "", "issue": "", "pages": "138--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, S. H., T. H. Tsai, and W. L. Hsu, \"Text Categorization Using Automatically Acquired Domain Ontology,\" In Proceedings of the Sixth International Workshop on Information Retrieval with Asian Languages (IRAL-03), Sapporo, Japan, 2003, pp.138-145.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Domain Event Extraction and Representation with Domain Ontology", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Wu", "suffix": "" }, { "first": "T", "middle": [ "H" ], "last": "Tsai", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the IJCAI-03 Workshop on Information Integration on the Web", "volume": "", "issue": "", "pages": "33--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, S. H., T. H. Tsai, and W. L. 
Hsu, \"Domain Event Extraction and Representation with Domain Ontology,\" In Proceedings of the IJCAI-03 Workshop on Information Integration on the Web, Acapulco, Mexico, 2003, pp.33-38.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A study of Semantic Disambiguation Based on HowNet", "authors": [ { "first": "X", "middle": [], "last": "Yang", "suffix": "" }, { "first": "T", "middle": [], "last": "Li", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "7", "issue": "", "pages": "47--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, X. and T. Li, \"A study of Semantic Disambiguation Based on HowNet,\" Computational Linguistics and Chinese Language Processing, Vol. 7, No. 1, February 2002, pp.47-78.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "現代漢語句模研究 (Studies on Semantic Structure Patterns of Sentences in Modern Chinese)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "朱曉亞 (Zhu Xiaoya), 現代漢語句模研究 (Studies on Semantic Structure Patterns of Sentences in Modern Chinese), 北京大學出版社 (Peking University Press), 2001.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "An illustration of the KR-tree using 人工物 (artifact) as an example of a noun-sense subclass. 
The English words in parentheses are provided for explanatory purposes only.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "AUTO-NVEF flow chart.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "NVEF sense-pair [attribute|屬性,ability|能力,&eat|吃] - [subtract|削減] and the NVEF word-pair [食量 (eating capacity)] - [減少 (decrease)]. The principles for confirming meaningful NVEF knowledge are given in section 3.2. Appendix D provides a snapshot of the user interface that evaluators use to manually confirm generated NVEF knowledge for the Chinese sentence 高度壓力 (high pressure) 使 (makes) 有些 (some) 人 (people) 食量 (eating capacity) 減少 (decrease).", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "The user interface for confirming NVEF knowledge using the generated NVEF knowledge for the Chinese sentence 高度壓力 (high pressure) 使 (makes) 有些 (some) 人 (people) 食量 (eating capacity) 減少 (decrease). The English words in parentheses are provided for explanatory purposes only. [ ] indicate nouns and <> indicate verbs.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "text": "", "type_str": "table", "html": null, "content": "
C.Word a  E.Word a  Part-of-speech  Sense (i.e. DEF in HowNet)
車  Che (surname)  Noun  character|文字,surname|姓,human|人,ProperName|專
車  car  Noun  LandVehicle|車
車  turn  Verb  cut|切削
", "num": null }, "TABREF5": { "text": "", "type_str": "table", "html": null, "content": "
BackwardForwardBackward = Forward
Accuracy82.5%81.7%86.86%
Recall100%100%89.33%
", "num": null }, "TABREF6": { "text": "", "type_str": "table", "html": null, "content": "
Noun Verb Adjective Adverb Preposition Conjunction Expletive Structural Particle
CKIP    N  V  A    D    P   C     T     De
HowNet  N  V  ADJ  ADV  PP  CONJ  ECHO  STRU
", "num": null }, "TABREF7": { "text": "2 =湧入, ADJ3 = 許多 and N4 = 觀眾. Since the corresponding NV word-pairs for the FPOS N1 V2 ADJ3 N4 are N1-V2 and N4-V2, AUTO-NVEF will generate two NV word-pairs: 現場(N)-湧入(V) and 湧入(V)-觀眾(N). In [朱曉亞 2001], there are some useful semantic structure patterns of Modern Chinese sentences for creating FPOS mappings and their corresponding NV word-pairs. 3) Generate NV knowledge. According to HowNet, AUTO-NVEF computes all the NV sense-pairs for the generated NV word-pairs. Consider the generated NV word-pairs 現場", "type_str": "table", "html": null, "content": "
(N)-湧入(V) and 湧入(V)-觀眾(N). AUTO-NVEF will generate two collections of NV
knowledge:
NV1 = [現場(locale)/place|地方,#fact|事情/N] - [湧入(enter)/GoInto|進入/V], and
NV2 = [觀眾(audience)/human|人,*look|看,#entertainment|藝,#sport|育,*recreation|娛樂/N] - [湧入(enter)/GoInto|進入/V].
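The two generation steps above (map an FPOS sequence to NV word-pairs, then attach HowNet DEFs to form NV knowledge) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: `FPOS_TO_NV` and `HOWNET_DEF` are toy stand-ins holding only the paper's example entries; a real system would use the full FPOS mapping table and the HowNet lexicon.

```python
# Sketch of AUTO-NVEF steps 2-3 (hypothetical tables, example entries only).

# FPOS -> (noun index, verb index) pairs; "N V ADJ N" yields N1-V2 and N4-V2.
FPOS_TO_NV = {
    ("N", "V", "ADJ", "N"): [(0, 1), (3, 1)],
}

# Tiny stand-in for a HowNet word -> DEF lookup.
HOWNET_DEF = {
    "現場": "place|地方,#fact|事情",
    "湧入": "GoInto|進入",
    "觀眾": "human|人,*look|看,#entertainment|藝,#sport|育,*recreation|娛樂",
}

def nv_knowledge(tagged):
    """tagged: list of (word, POS). Returns (noun, noun_def, verb, verb_def) tuples."""
    fpos = tuple(pos for _, pos in tagged)
    result = []
    for n_i, v_i in FPOS_TO_NV.get(fpos, []):
        noun, verb = tagged[n_i][0], tagged[v_i][0]
        result.append((noun, HOWNET_DEF.get(noun), verb, HOWNET_DEF.get(verb)))
    return result

tagged = [("現場", "N"), ("湧入", "V"), ("許多", "ADJ"), ("觀眾", "N")]
for noun, ndef, verb, vdef in nv_knowledge(tagged):
    print(f"[{noun}/{ndef}/N] - [{verb}/{vdef}/V]")
```

With the example sentence 現場湧入許多觀眾, this yields the two NV knowledge collections NV1 and NV2 shown above.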
", "num": null }, "TABREF8": { "text": "", "type_str": "table", "html": null, "content": "
One was the 2001 UDN corpus containing
4,539,624 Chinese sentences that were extracted from the United Daily News Web site
[On-Line United Daily News] from January 17, 2001 to December 30, 2001. The other was a
collection of specific text types, which included research reports, classical literature and
modern literature. The details of the training corpus, the testing corpora and the test sentence sets are given below. From the first testing corpus, we selected all the sentences extracted from the news of October 27, 2001, November 23, 2001 and December 17, 2001 in 2001 UDN as our first test sentence set. From the second testing corpus, we selected a research report, a classical novel and a modern novel for our second test sentence set.
NVEF accuracy
News article dateNVEF-ACNVEF-EWNVEF-AC + NVEF-EW
October 27, 200199.54%(656/659)98.43%(439/446)99.10% (1,095/1,105)
November 23, 200198.75%(711/720)95.95%(379/395)97.76% (1,090/1,115)
December 17, 200198.74%(1,015/1,028)98.53%(1,141/1,158)98.63% (2,156/2,186)
Total Average98.96%(2,382/2,407)98.00%(1,959/1,999)98.52% (4,341/4,406)
", "num": null }, "TABREF9": { "text": "", "type_str": "table", "html": null, "content": "
NVEF accuracy
Text typeNVEF-ACNVEF-EWNVEF-AC + NVEF-EW
Technique Report97.12%(236/243)96.61%(228/236)96.86% (464/479)
Classic novel98.64%(218/221)93.55%(261/279)95.80% (479/500)
Modern novel98.18%(377/384)95.42%(562/589)96.51% (939/973)
Total Average98.00%(831/848)95.20%(1,051/1,104)96.41% (1,882/1,952)
", "num": null }, "TABREF10": { "text": "", "type_str": "table", "html": null, "content": "
TypeExample SentenceNoun / DEFVerb / DEFPercentage
N:V[\u5de5\u7a0b]<\u5b8c\u6210> (The construction is now completed)\u5de5\u7a0b (construction) affairs|\u4e8b\u52d9,industrial|\u5de5\u5b8c\u6210 (complete) fulfill|\u5be6\u73fe24.15%
N-V\u5168\u90e8[\u5de5\u7a0b]\u9810\u5b9a\u5e74\u5e95<\u5b8c\u6210> (All of constructions will be completed by the end of year)\u5de5\u7a0b (construction) affairs|\u4e8b\u52d9,industrial|\u5de5\u5b8c\u6210 (complete) fulfill|\u5be6\u73fe43.83%
V:N<\u5b8c\u6210>[\u5de5\u7a0b] (to complete a construction)\u5de5\u7a0b (construction) affairs|\u4e8b\u52d9,industrial|\u5de5\u5b8c\u6210 (complete) fulfill|\u5be6\u73fe19.61%
V-N  建商承諾在年底前<完成>鐵路[工程] (The building contractor promises to complete the railway construction before the end of this year)  工程 (construction) affairs|事務,industrial|工  完成 (complete) fulfill|實現  12.41%
", "num": null }, "TABREF11": { "text": "", "type_str": "table", "html": null, "content": "
TypeExample SentenceNounVerbPercentage
N1V1\u7136\u5f8c\u5c31<\u68c4>[\u6211]\u800c\u53bb\u6211(I)\u68c4(give up)6.4%
N1V2+<\u89ba\u5f97>[\u4ed6]\u5f88\u5b5d\u9806\u4ed6(he)\u89ba\u5f97(feel)6.8%
N2+V1<\u8cb7>\u4e86[\u53ef\u6a02]\u4f86\u559d\u53ef\u6a02(cola)\u8cb7(buy)22.2%
N2+V2+<\u5f15\u7206>\u53e6\u4e00\u5834\u7f8e\u897f[\u6230\u722d]\u6230\u722d(war)\u5f15\u7206(cause)64.6%
", "num": null }, "TABREF12": { "text": "", "type_str": "table", "html": null, "content": "
TopVerb of N1V1 / Example SentencePercentage of N1V1Verb of N2+V1 / Example SentencePercentage of N2+V1
1\u6709(have) / [\u6211]<\u6709>\u4e5d\u9805\u7372\u53c3\u8cfd\u8cc7\u683c16.5%\u662f(be) / \u518d\u4f86\u5c31<\u662f>\u4e00\u9593\u9673\u5217\u6a02\u5668\u7684[\u623f\u5b50]20.5%
2\u662f(be) / [\u5b83]<\u662f>\u505a\u4eba\u7684\u6839\u672c8.8%\u6709(have) / \u662f\u4e0d\u662f<\u6709>[\u554f\u984c]\u4e8615.5%
3\u8aaa(speak) / [\u4ed6]<\u8aaa>7.7%\u8aaa(speak) / \u800c\u8ac7\u5230\u6210\u529f\u7684\u79d8\u8a23[\u59ae\u5a1c]<\u8aaa>3.9%
4\u770b(see) / <\u770b>\u8457[\u5b83]\u88ab\u5361\u8eca\u8f09\u8d704.4%\u5230(arrive) / \u4e00[\u5230]<\u9670\u5929>3.6%
5\u8cb7(buy) / \u7f8e\u570b\u672c\u571f\u7684\u4eba\u6975\u5c11\u5230\u90a3\u5152< \u8cb7>[\u5730]3.3%\u8b93(let) / <\u8b93>\u73fe\u8077[\u4eba\u54e1]\u7121\u8655\u68f2\u8eab2.5%
", "num": null }, "TABREF13": { "text": "", "type_str": "table", "html": null, "content": "
The Top 5 multi-character verbs in N1V2+ and N2+V2+ word-pairs in manually-edited NVEF knowledge for 1,000 randomly selected ASBC sentences and their percentages. The English words in parentheses are provided for explanatory purposes only. [ ] indicate nouns and <> indicate verbs.
Top  Verb of N1V2+ / Example Sentence  Percentage of N1V2+  Verb of N2+V2+ / Example Sentence  Percentage of N2+V2+
1  吃到(eat) / 你也可能<吃到>毒[魚]  2.06%  表示(express) / 這位[官員]<表示>  1.2%
2\u77e5\u9053(know) / [\u6211]<\u77e5\u9053>\u54e62.06%\u4f7f\u7528(use) / \u6b4c\u8a5e<\u4f7f\u7528>\u65e5\u5e38\u751f\u6d3b[\u8a9e\u8a00]1.1%
3\u559c\u6b61(like) / \u81f3\u5c11\u9084\u6709\u4eba<\u559c\u6b61>[\u4ed6]2.06%\u6c92\u6709(not have) / \u6211\u5011\u5c31<\u6c92\u6709>\u4ec0\u9ebc[\u5229\u6f64]\u4e860.9%
4\u5145\u6eff(fill) / [\u5fc3]\u88e1\u5c31<\u5145\u6eff>\u4e86\u611f\u52d5\u8207\u611f\u60692.06%\u5305\u62ec(include) / <\u5305\u62ec>\u88ab\u76e3\u7981\u7684\u6c11\u904b[\u4eba\u58eb]0.8%
5\u6253\u7b97(plan) / [\u4f60]<\u6253\u7b97>\u600e\u9ebc\u8a662.06%\u6210\u70ba(become) / \u9019\u7a2e\u8207\u4e0a\u53f8<\u6210\u70ba>\u77e5\u5fc3[\u670b\u53cb]\u7684\u4f5c\u6cd50.7%
", "num": null }, "TABREF14": { "text": "", "type_str": "table", "html": null, "content": "
Type  Confirmation Principle for Non-Meaningful NVEF Knowledge  Percentage
1  *NV word-pair that cannot produce a correct or sensible POS tag for the Chinese sentence  33% (33/100)
2  *The combination of an NV sense-pair (DEF) and an NV word-pair that cannot form an NVEF knowledge collection  17% (17/100)
3  *One word sense in an NV word-pair that does not inherit its corresponding noun sense or verb sense  2% (2/100)
4  The NV word-pair is not an NVEF word-pair for the sentence although it satisfies all the confirmation principles  1% (1/100)
5  Incorrect word POS in HowNet  1% (1/100)
6  Incorrect word sense in HowNet  3% (3/100)
7  No proper definition in HowNet. Ex: 暫居 (temporary residence) has two meanings: one is <reside|住下> (緊急暫居服務 (emergency temporary residence service)) and the other is <situated|處,Timeshort|暫> (SARS帶來暫時性的經濟震盪 (SARS will produce only a temporary economic shock))  7% (7/100)
8  Noun senses or verb senses that are used in Old Chinese  3% (3/100)
9  Word sense disambiguation failure: (1) polysemous words; (2) proper nouns identified as common words. Ex: 公牛隊 (Chicago Bulls) 公牛 (bull) <livestock|牲畜>; 太陽隊 (Phoenix Suns) 太陽 (sun) <celestial|天體>; 花木蘭 (Hua Mulan) 木蘭 (magnolia) <FlowerGrass|花草>  27% (27/100)
10  Unknown word problem  4% (4/100)
11  Word segmentation error  2% (2/100)
Type 1, 2 and 3 errors are failures of the three confirmation principles for meaningful NVEF knowledge described in section 3.2, respectively.
", "num": null }, "TABREF15": { "text": "", "type_str": "table", "html": null, "content": "
NP type  Test Sentence  Noun / DEF  Verb / DEF
1  警方維護地方[治安]<辛勞> (Police work hard to safeguard local security.)  治安 (public security) / attribute|屬性,circumstances|境況,safe|安,politics|政,&organization|組織  辛勞 (work hard) / endeavour|賣力
2  <模糊>的[白宮]景象 (The White House looked vague in the heavy fog.)  白宮 (White House) / house|房屋,institution|機構,#politics|政,(US|美國)  模糊 (vague) / PolysemousWord|多義詞,CauseToDo|使動,mix|混合
3  <生活>條件[不足] (Lack of living conditions)  不足 (lack) / attribute|屬性,fullness|空滿,incomplete|缺,&entity|實體  生活 (life) / alive|活著
4  網路帶給[企業]許多<便利> (The Internet brings numerous benefits to industries.)  企業 (industry) / InstitutePlace|場所,*produce|製造,*sell|賣,industrial|工,commercial|商  便利 (benefit) / benefit|便利
5  <盈盈>[笑靨] (smile radiantly)  笑靨 (a smiling face) / part|部件,%human|人,skin|皮  盈盈 (an adjective normally used to describe someone's beautiful smile) / exist|存在
6  保費較貴的<壽險>[保單] (a higher-cost life insurance policy)  保單 (insurance policy) / bill|票據,*guarantee|保證  壽險 (life insurance) / guarantee|保證,scope=die|死,commercial|商
7  債券型基金吸金[存款]<失血> (Bond funds make profits but bank savings are lost)  存款 (bank savings) / money|貨幣,$SetAside|留存  失血 (bleed; lose money, used only in finance) / bleed|出血
8  華南[銀行]中山<分行> (Hwa-Nan Bank, Jung-San Branch)  銀行 (bank) / InstitutePlace|場所,@SetAside|留存,@TakeBack|取回,@lend|借出,#wealth|錢財,commercial|商  分行 (branch) / separate|分離
9  [根據]<調查> (according to the investigation)  根據 (evidence) / information|信息  調查 (investigate) / investigate|調查
10  <零售>[通路] (retailer)  通路 (route) / facilities|設施,route|路  零售 (retail sales) / sell|賣
11  從今日<起到>5[月底] (from today to the end of May)  月底 (the end of the month) / time|時間,ending|末,month|月  起到 (to elaborate) / do|做
", "num": null } } } }