{ "paper_id": "O04-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:00:24.344525Z" }, "title": "Applying Meaningful Word-Pair Identifier to the Chinese Syllable-to-Word Conversion Problem", "authors": [ { "first": "Jia-Lin", "middle": [], "last": "Tsai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Nankang, Taipei", "country": "Taiwan, R.O.C" } }, "email": "tsaijl@iis.sinica.edu.tw" }, { "first": "Tien-Jien", "middle": [], "last": "Chiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Nankang, Taipei", "country": "Taiwan, R.O.C" } }, "email": "" }, { "first": "Wen-Lian", "middle": [], "last": "Hsu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Nankang, Taipei", "country": "Taiwan, R.O.C" } }, "email": "hsu@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Syllable-to-word (STW) conversion is a frequently used Chinese input method that is fundamental to syllable/speech understanding. The two major problems with STW conversion are the segmentation of syllable input and the ambiguities caused by homonyms. This paper describes a meaningful word-pair (MWP) identifier that can be used to resolve homonym/segmentation ambiguities and perform STW conversion effectively for Chinese language texts. It is designed as a support system with Chinese input systems. In this paper, five types of meaningful word-pairs are investigated, namely: noun-verb (NV), noun-noun (NN), verb-verb (VV), adjective-noun (AN) and adverb-verb (DV). The pre-collected datasets of meaningful word-pairs are based on our previous work auto-generation of NVEF knowledge in Chinese (AUTO-NVEF) [30, 32], where NVEF stands for noun-verb event frame. 
The main purpose of this study is to illustrate that a hybrid approach of combining statistical language modeling (SLM) with contextual information, such as meaningful word-pairs, is effective for improving syllable-to-word systems and is important for syllable/speech understanding. Our experiments show the following: (1) the MWP identifier achieves tonal (syllables with four tones) and toneless (syllables without four tones) STW accuracies of 98.69% and 90.7%, respectively, among the identified word-pairs for the test syllables; (2) by STW error analysis, we find that the major critical problem of tonal STW systems is the failure of homonym disambiguation (52%), while that of toneless STW systems is inadequate syllable segmentation (48%); (3) by applying the MWP identifier, together with the Microsoft input method editor (MSIME 2003) and an optimized bigram model (BiGram), the tonal and toneless STW improvements of the two STW systems are 25.25%/21.82% and 12.87%/15.62%, respectively.", "pdf_parse": { "paper_id": "O04-1009", "_pdf_hash": "", "abstract": [ { "text": "Syllable-to-word (STW) conversion is a frequently used Chinese input method that is fundamental to syllable/speech understanding. The two major problems with STW conversion are the segmentation of syllable input and the ambiguities caused by homonyms. This paper describes a meaningful word-pair (MWP) identifier that can be used to resolve homonym/segmentation ambiguities and perform STW conversion effectively for Chinese language texts. It is designed as a support system with Chinese input systems. In this paper, five types of meaningful word-pairs are investigated, namely: noun-verb (NV), noun-noun (NN), verb-verb (VV), adjective-noun (AN) and adverb-verb (DV). The pre-collected datasets of meaningful word-pairs are based on our previous work auto-generation of NVEF knowledge in Chinese (AUTO-NVEF) [30, 32], where NVEF stands for noun-verb event frame. 
The main purpose of this study is to illustrate that a hybrid approach of combining statistical language modeling (SLM) with contextual information, such as meaningful word-pairs, is effective for improving syllable-to-word systems and is important for syllable/speech understanding. Our experiments show the following: (1) the MWP identifier achieves tonal (syllables with four tones) and toneless (syllables without four tones) STW accuracies of 98.69% and 90.7%, respectively, among the identified word-pairs for the test syllables; (2) by STW error analysis, we find that the major critical problem of tonal STW systems is the failure of homonym disambiguation (52%), while that of toneless STW systems is inadequate syllable segmentation (48%); (3) by applying the MWP identifier, together with the Microsoft input method editor (MSIME 2003) and an optimized bigram model (BiGram), the tonal and toneless STW improvements of the two STW systems are 25.25%/21.82% and 12.87%/15.62%, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "More than 100 Chinese input methods have been developed in the past [1, 17, 12, 5, 18, 10, 19, 16, 4, 28, 11, 20] . 
Their underlying approaches can be classified into four types:", "cite_spans": [ { "start": 68, "end": 71, "text": "[1,", "ref_id": null }, { "start": 72, "end": 75, "text": "17,", "ref_id": "BIBREF16" }, { "start": 76, "end": 79, "text": "12,", "ref_id": "BIBREF11" }, { "start": 80, "end": 82, "text": "5,", "ref_id": "BIBREF4" }, { "start": 83, "end": 86, "text": "18,", "ref_id": "BIBREF17" }, { "start": 87, "end": 90, "text": "10,", "ref_id": "BIBREF9" }, { "start": 91, "end": 94, "text": "19,", "ref_id": "BIBREF18" }, { "start": 95, "end": 98, "text": "16,", "ref_id": "BIBREF15" }, { "start": 99, "end": 101, "text": "4,", "ref_id": "BIBREF3" }, { "start": 102, "end": 105, "text": "28,", "ref_id": "BIBREF26" }, { "start": 106, "end": 109, "text": "11,", "ref_id": "BIBREF10" }, { "start": 110, "end": 113, "text": "20]", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(a) Optical character recognition (OCR) based [5] , (b) On-line handwriting based [19] , (c) Speech based [4, 10] , and (d) Keyboard based, such as syllabic-input-to-character [27, 16, 2, 14, 15, 22] ; arbitrary codes based [8] ; and structure scheme based [11] . The major goal of these syllable input systems is to achieve high STW accuracy, but syllable understanding is rarely considered [16] . Currently, the most popular method for Chinese input is syllable based (or phonetic/pinyin based), because Chinese people are taught to write the corresponding phonetic/pinyin syllable of each Chinese character in primary school. Basically, each Chinese character corresponds to at least one syllable. Although there are more than 13,000 distinct Chinese characters (of which 5,400 are commonly used), there are only 1,300 distinct syllables. The homonym (homophone) problem is, therefore, quite severe when using a Chinese phonetic input method [5] . 
As per [26] , each Chinese syllable can be mapped from 3 to over 100 Chinese characters, with the average number of characters per syllable being 17. Therefore, homonym disambiguation is a critical problem that requires the development of an effective syllable-to-word (STW) conversion system for Chinese. A comparable problem for STW conversion in English is word-sense disambiguation (WSD).", "cite_spans": [ { "start": 46, "end": 49, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 82, "end": 86, "text": "[19]", "ref_id": "BIBREF18" }, { "start": 106, "end": 109, "text": "[4,", "ref_id": "BIBREF3" }, { "start": 110, "end": 113, "text": "10]", "ref_id": "BIBREF9" }, { "start": 176, "end": 180, "text": "[27,", "ref_id": "BIBREF25" }, { "start": 181, "end": 184, "text": "16,", "ref_id": "BIBREF15" }, { "start": 185, "end": 187, "text": "2,", "ref_id": "BIBREF1" }, { "start": 188, "end": 191, "text": "14,", "ref_id": "BIBREF13" }, { "start": 192, "end": 195, "text": "15,", "ref_id": "BIBREF14" }, { "start": 196, "end": 199, "text": "22]", "ref_id": "BIBREF21" }, { "start": 224, "end": 227, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 257, "end": 261, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 392, "end": 396, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 945, "end": 948, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 958, "end": 962, "text": "[26]", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are two conventional approaches for STW conversion: the linguistic approach based on syntax parsing, semantic template matching and contextual information [18, 22, 16, 28, 15] ; and the statistical approach based on the n-gram model where n is usually 2 or 3 [12, 10, 11, 20, 21, 13, 27] . 
Although the linguistic approach requires considerable effort in designing effective syntax rules, semantic templates or contextual information, it is more user-friendly than the statistical approach (i.e. it is easier to understand why such a system makes a mistake) [16] . On the other hand, the statistical language model (SLM) used in the statistical approach requires less effort and has been widely adopted in commercial systems. However, the power of the statistical approach depends on the training corpus [10] and the SLM pays little attention to syllable understanding [16] . Following the work of [12, 18, 10, 28, 11, 15, 13] , a better approach to STW conversion is to integrate both linguistic knowledge (such as contextual information) and statistical approaches (such as an n-gram model). We believe that our research proves the efficacy of such an integrated approach.", "cite_spans": [ { "start": 161, "end": 165, "text": "[18,", "ref_id": "BIBREF17" }, { "start": 166, "end": 169, "text": "22,", "ref_id": "BIBREF21" }, { "start": 170, "end": 173, "text": "16,", "ref_id": "BIBREF15" }, { "start": 174, "end": 177, "text": "28,", "ref_id": "BIBREF26" }, { "start": 178, "end": 181, "text": "15]", "ref_id": "BIBREF14" }, { "start": 265, "end": 269, "text": "[12,", "ref_id": "BIBREF11" }, { "start": 270, "end": 273, "text": "10,", "ref_id": "BIBREF9" }, { "start": 274, "end": 277, "text": "11,", "ref_id": "BIBREF10" }, { "start": 278, "end": 281, "text": "20,", "ref_id": "BIBREF19" }, { "start": 282, "end": 285, "text": "21,", "ref_id": "BIBREF20" }, { "start": 286, "end": 289, "text": "13,", "ref_id": "BIBREF12" }, { "start": 290, "end": 293, "text": "27]", "ref_id": "BIBREF25" }, { "start": 564, "end": 568, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 810, "end": 814, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 875, "end": 879, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 904, "end": 908, "text": "[12,", "ref_id": "BIBREF11" }, { "start": 909, 
"end": 912, "text": "18,", "ref_id": "BIBREF17" }, { "start": 913, "end": 916, "text": "10,", "ref_id": "BIBREF9" }, { "start": 917, "end": 920, "text": "28,", "ref_id": "BIBREF26" }, { "start": 921, "end": 924, "text": "11,", "ref_id": "BIBREF10" }, { "start": 925, "end": 928, "text": "15,", "ref_id": "BIBREF14" }, { "start": 929, "end": 932, "text": "13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "According to previous studies [5, 28, 11, 20, 9] , besides homonyms, correct syllable-word segmentation is another crucial problem of STW conversion. Incorrect syllable-word segmentation directly influences the conversion rate of STW. For example, consider the syllable sequence \"yi1 du4 ji4 yu2 zhong1 guo2 de5 niang4 jiu3 ji4 shu4\" of the sentence \" (once) (covet) (China) (of) (making-wine) (technique).\" According to the CKIP lexicon [6] , the two possible syllable-word segmentations are: (F) \"yi1/du4ji4/yu2/zhong1guo2/de5/niang4jiu3/ji4shu4\"; and (B) \"yi1/du4/ji4yu2/zhong1guo2/de5/niang4jiu3/ji4shu4.\" (We use the forward (F) and the backward (B) longest syllable-word first strategies [3] , and \"/\" to indicate a syllable-word boundary).", "cite_spans": [ { "start": 30, "end": 33, "text": "[5,", "ref_id": "BIBREF4" }, { "start": 34, "end": 37, "text": "28,", "ref_id": "BIBREF26" }, { "start": 38, "end": 41, "text": "11,", "ref_id": "BIBREF10" }, { "start": 42, "end": 45, "text": "20,", "ref_id": "BIBREF19" }, { "start": 46, "end": 48, "text": "9]", "ref_id": "BIBREF8" }, { "start": 438, "end": 441, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 694, "end": 697, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Among the above syllable-word segmentations, there is an ambiguous syllable-word section: /du4ji4/yu2/ (/{ }/{ , , , , , , , , , , , , , , , , , , , , , , }/); and /du4/ji4yu2/ (/{ , , , , , , }/{ , }/), respectively. In this case, if the system has the contextual information that the pairs \" (technique)-(covet)\" and \" (once)-(covet)\" are, respectively, meaningful noun-verb (NV) and adverb-verb (DV) word-pairs, then the ambiguous syllable-word section can be effectively resolved and the word-pairs \" (technique)-(covet)\" and \" (once)-(covet)\" of this syllable sequence can be correctly identified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For the above case, if we look at the Sinica corpus [6] , the bigram frequencies of \" (covet)-(China)\" and \" (at)-(China)\" are 0 and 24, respectively. Therefore, by using a bigram model trained with the Sinica corpus, the forward syllable-word segmentation would conclude that the following word segmentation / / /, will be incorrect. In fact, if we use Microsoft Input Method Editor 2003 for Traditional Chinese (a trigram like STW product), the syllables of the above example will be converted to \" (once) (continue) (to) (China) (of) (making-wine) (technique).\" It is widely recognized that unseen event (\" -\") and over-weighting (\" -\") are two major problems of SLM systems [10, 11] . Practical SLM is either a bigram or a trigram model. As the above case shows, the meaningful word-pairs (or contextual information) \" (technique)-(covet)\" and \"", "cite_spans": [ { "start": 52, "end": 55, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 678, "end": 682, "text": "[10,", "ref_id": "BIBREF9" }, { "start": 683, "end": 686, "text": "11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "(once)-(covet)\" can be used to overcome both the unseen event and over-weighting problems of SLM-based STW systems. In [29] , we showed that the knowledge of noun-verb event frame (NVEF) sense-pairs and their corresponding NVEF word-pairs (NVEF knowledge) are useful for effectively resolving word sense ambiguity with an accuracy of 93.7%. In [28] , we showed that a NVEF word-pair identifier with pre-collected NVEF knowledge can be used to obtain a tonal (syllables with four tones) STW accuracy of more than 99% for the NVEF related portion in Chinese.", "cite_spans": [ { "start": 119, "end": 123, "text": "[29]", "ref_id": "BIBREF27" }, { "start": 344, "end": 348, "text": "[28]", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The objective of this study is to illustrate the effectivness of meaningful noun-verb (NV), noun-noun (NN), verb-verb (VV), adjective-noun (AN) and adverb-verb (DV) word-pairs for solving Chinese STW conversion problems. We conduct STW experiments to show that the tonal and toneless STW accuracies of conventional SLM models and the commercial input products can be improved by using a meaningful word-pair identifier without a tuning process. In this paper, we use tonal to indicate the syllables input with four tones, such as \"niang4( ) jiu3( ) ji4( ) shu4( ),\" and toneless to indicate the syllables input without four tones, such as \"niang( ) jiu( ) ji( ) shu( ).\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The remainder of this paper is arranged as follows. In Section 2, we propose the method for auto-generating the meaningful word-pairs in Chinese based on [30, 32] , and a meaningful word-pair identifier to resolve homonym/segmentation ambiguities of STW conversion in Chinese. The meaningful word-pair identifier is based on pre-collected datasets of meaningful word-pairs. 
In Section 3, we present our STW experiment results and analysis. Finally, in Section 4, we give our conclusions and suggest some future research directions.", "cite_spans": [ { "start": 154, "end": 158, "text": "[30,", "ref_id": "BIBREF28" }, { "start": 159, "end": 162, "text": "32]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To develop the meaningful word-pair (MWP) identifier, we selected Hownet [7] as our system's dictionary because it provides knowledge of Chinese words, word senses and parts of speech (POS). The Hownet dictionary used in this study contains 58,541 Chinese words, among which there are 33,264 nouns, 16,723 verbs, 8,872 adjectives and 882 adverbs.", "cite_spans": [ { "start": 73, "end": 76, "text": "[7]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Development of the Meaningful Word-Pair Identifier", "sec_num": "2." }, { "text": "In this system's dictionary, the syllable-word for each word is obtained by using the inverse phoneme-to-character system presented in [15] , while the word frequencies are computed according to a fixed-size United Daily News (UDN) 2001 corpus. The latter is a collection of 4,539,624 Chinese sentences extracted from articles on the United Daily News Website [25] from January 17, 2001 to December 30, 2001. Table 1 shows the statistics of the number of articles per article class in this UDN 2001 corpus. ", "cite_spans": [ { "start": 135, "end": 139, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 360, "end": 364, "text": "[25]", "ref_id": null }, { "start": 370, "end": 386, "text": "January 17, 2001", "ref_id": null }, { "start": 390, "end": 407, "text": "December 30, 2001", "ref_id": null } ], "ref_spans": [ { "start": 410, "end": 417, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Development of the Meaningful Word-Pair Identifier", "sec_num": "2." 
}, { "text": "In [32] , we propose an AUTO-NVEF system to auto-generate NVEF knowledge from in Chinese. It extracts NVEF knowledge from Chinese sentences by four major processes: (1) Segmentation checking;", "cite_spans": [ { "start": 3, "end": 7, "text": "[32]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Generating the Meaningful Word-Pair", "sec_num": "2.1" }, { "text": "(2) Initial Part-of-Speech (IPOS) sequence generation; (3) NV knowledge generation; and (4) NVEF knowledge auto-confirmation. The details of the four processes can be found in [32] . Take the Chinese sentence \" (concert)/ (locale)/ (enter)/ (many)/ (audience members)\" as an example. For this sentence, AUTO-NVEF will generate two collections of NVEF knowledge:", "cite_spans": [ { "start": 176, "end": 180, "text": "[32]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Generating the Meaningful Word-Pair", "sec_num": "2.1" }, { "text": "(locale)-(enter) and (audience members)-(enter). In [32] , we reported that AUTO-NVEF achieved 98.52% accuracy for news and 96.41% for specific text types, which included research reports, classical literature and modern literature. In addition, it automatically discovered over 400,000 NVEF word-pairs in the UDN 2001 corpus.", "cite_spans": [ { "start": 52, "end": 56, "text": "[32]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Generating the Meaningful Word-Pair", "sec_num": "2.1" }, { "text": "Using AUTO-NVEF as the base, we extended the system into a meaningful word-pair (MWP) generation called AUTO-MWP. The steps of AUTO-MWP are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating the Meaningful Word-Pair", "sec_num": "2.1" }, { "text": "Step 1. Use AUTO-NVEF to generate NVEF word-pairs for the given Chinese sentence. AUTO-NVEF adopts a forward=backward maximum matching technique to perform word segmentation and a bigram-like model to perform POS tagging [32] . 
If no NVEF word-pairs are generated, go to Step 3. Step 2. According to the generated NVEF word-pairs and the word-segmented sentence with POS tagging from Step 1, the auto-generation methods of meaningful NN, VV, AN and DV word-pairs are:", "cite_spans": [ { "start": 221, "end": 225, "text": "[32]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Generating the Meaningful Word-Pair", "sec_num": "2.1" }, { "text": "(1) Generation of NN word-pair. When the number of generated NVEF word-pairs is greater than 1, this sub-process will be triggered. If the nouns of two generated NVEF word-pairs share the same verb, the two nouns will be designated as a meaningful NN word-pair. Take the generated NVEF word-pairs of (locale)-(enter) and (audience members)-(enter) for the sentence \" (concert) (locale) (enter) (many) (audience members)\" as examples. The noun (locale) and the noun (audience members) are designated as an NN word-pair because the two nouns share the same verb (enter) in this sentence. (2) Generation of VV word-pair. When the number of generated NVEF word-pairs is greater than 1, this sub-process will be triggered. If the verbs of two generated NVEF word-pairs share the same noun, the two verbs will be designated as a meaningful VV word-pair. Take the generated NVEF word-pairs (the end of year)-(prearrange) and (the end of year)-(complete) for the sentence \" (whole) (construction) (prearrange) (the end of year) (complete)\" as examples. The verb (prearrange) and the verb (complete) are designated as a VV word-pair because the two verbs share the same noun (the end of year). (3) Generation of AN word-pair. For each noun of a generated NVEF word-pair, if the word immediately to its left is an adjective, the noun and the adjective are designated as one AN word-pair. Take the generated NVEF word-pair (audience members)-(enter) for the word-segmented and POS-tagged sentence \" (N) (N) (V) (ADJ) (N)\" as an example. 
Since the word immediately to the left of (audience members) is an adjective (many), the adjective (many) and the noun (audience members) are designated as an AN word-pair. (4) Generation of DV word-pair. For each verb of a generated NVEF word-pair, if the word immediately to its left is an adverb, the verb and the adverb are designated as one DV word-pair. Take the generated NVEF word-pair (price)-(maintain) for the word-segmented and POS-tagged sentence \" (N) (ADV) (V) (ADJ)\" as an example. Since the word immediately to the left of (maintain) is an adverb (ordinarily), the adverb (ordinarily) and the verb (maintain) are designated as a DV word-pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating the Meaningful Word-Pair", "sec_num": "2.1" }, { "text": "Step 3. Stop. Table 2 shows the number of generated NV, NN, VV, AN and DV word-pairs obtained by applying AUTO-MWP to the UDN 2001 corpus. The frequencies of all the generated meaningful word-pairs were computed from the UDN 2001 corpus. Note that the frequency of a meaningful word-pair is the number of sentences that contain the word-pair with the same word-pair order in the UDN 2001 corpus. Table 3 shows fifteen randomly selected NV, NN, VV, AN and DV word-pairs and their corresponding frequencies in the generated MWP datasets for the UDN 2001 corpus. ", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 394, "end": 401, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Generating the Meaningful Word-Pair", "sec_num": "2.1" }, { "text": "We developed an NVEF word-pair identifier [28] for Chinese syllable-to-word (STW) conversion and achieved a tonal STW accuracy of more than 99% on the NVEF-related portion. This NVEF word-pair identifier is based on the techniques of longest syllabic NVEF-word-pair first (LS-NVWF), exclusion-word-list (EWL) checking and pre-collected NVEF knowledge. 
By modifying the algorithm of this identifier in [28] , we obtain our meaningful word-pair (MWP) identifier (Figure 1 ). In Figure 1 , the MWP data is a mixed collection of all auto-generated meaningful NV, NN, VV, AN and DV word-pairs. The algorithm of the MWP identifier is as follows:", "cite_spans": [ { "start": 41, "end": 45, "text": "[28]", "ref_id": "BIBREF26" }, { "start": 389, "end": 393, "text": "[28]", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 449, "end": 458, "text": "(Figure 1", "ref_id": "FIGREF0" }, { "start": 465, "end": 473, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "Step 1. Input tonal (with four tones) or toneless (without four tones) syllables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "Step 2. Generate all possible word-pairs found in the input syllables. Exclude certain NV word-pairs based on EWL checking [28] . Appendix A lists all of the exclusion words used in this study. Note that our meaningful word-pairs include monosyllabic nouns/adjectives/adverbs and monosyllabic verbs, except \" (be)\" and \" (has/have)\", which are dropped in this step.", "cite_spans": [ { "start": 123, "end": 127, "text": "[28]", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "Step 3. Word-pairs that match a meaningful word-pair in the generated MWP data are used as the initial MWP set for the input syllables. From the initial MWP set, select a key word-pair and its co-occurring word-pairs to be the final MWP set. Conflicts are resolved using the longest syllabic word-pair first (LS-WPF) strategy. 
If there are two or more word-pairs that satisfy the same condition, the system triggers the following processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "(1) The word-pair with the greatest frequency (the number of sentences that contain the word-pair with the same word-pair order in the UDN 2001 corpus) is selected as the key word-pair. If there are two or more word-pairs with the same frequency, one of them is randomly selected as the key word-pair. (2) The word-pairs that co-occur with the key word-pair in the UDN 2001 corpus are selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "(3) The key and co-occurring word-pairs are then combined as the final MWP set for Step 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "Step 4. Arrange all word-pairs of the final MWP set into a MWP-sentence as shown in Table 3 . If no word-pairs can be identified from the input syllables, a null MWP-sentence is produced. The meaningful word-pairs found: (wen2 ming2)-(guo4 cheng2)/NN pair 3 (wen2 ming2)-(shuai1 wei2)/NV pair 1", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 91, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "Step. 3 The key meaningful word-pair: (wen2 ming2)-(guo4 cheng2)/NN pair The co-occurred word-pair:", "cite_spans": [ { "start": 6, "end": 7, "text": "3", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "(wen2 ming2)-(shuai1 wei2)/NV pair Step. 
4 MWP-sentence: yi1 ge5 de5 Table 3 is a step-by-step example that illustrates the four processes of our MWP identifier for the Chinese syllables \"yi1 ge5 wen2 ming2 de5 shuai1 wei2 guo4 cheng2( ", "cite_spans": [ { "start": 41, "end": 42, "text": "4", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 69, "end": 76, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Meaningful Word-Pair Identifier", "sec_num": "2.2" }, { "text": "To evaluate the STW performance of our MWP identifier, we define the STW accuracy, STW improvement, and identified character ratio (ICR) by the following equations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STW experiment", "sec_num": "3." }, { "text": "STW accuracy = # of correct characters / # of total characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STW experiment", "sec_num": "3." }, { "text": "STW improvement (STW error reduction rate) = (accuracy of STW system with MWP - accuracy of STW system) / (1 - accuracy of STW system).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STW experiment", "sec_num": "3." }, { "text": "Identified character ratio (ICR) = # of characters of identified MWPs / # of total characters in testing sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STW experiment", "sec_num": "3." }, { "text": "We use the inverse translator of the phoneme-to-character system in [15] to convert a test sentence into a syllable sequence. We then apply our MWP identifier to convert this syllable sequence back to characters and calculate its STW accuracy and identified character ratio by Equations (1) and (2). In this study, we conducted the STW experiment in a progressive manner. The results and analysis of the experiment are described in Sub-sections 3.2, 3.3 and 3.4. 
Appendix B presents two STW results that were obtained from the experiment.", "cite_spans": [ { "start": 68, "end": 72, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 295, "end": 298, "text": "(2)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Closed Test Set and Open Test Set", "sec_num": "3.1" }, { "text": "The purpose of this experiment is to demonstrate the tonal and toneless STW accuracies by using the MWP identifier with the generated meaningful NV, NN, VV, AN, DV and (NV+NN+VV+AN+DV) datasets, respectively. Note that the symbol (NV+NN+VV+AN+DV) stands for a mixed collection of all auto-generated meaningful NV, NN, VV, AN and DV word-pairs. From Tables 4a and 4b, the average tonal and toneless STW accuracies of the MWP identifier with the MWP (NV+NN+VV+AN+DV) data for the closed and open test sets are 98.46% and 90.70%, respectively. Between the closed and the open test sets, the differences of the tonal and toneless STW accuracies of the MWP identifier with the (NV+NN+VV+AN+DV) data are 0.49% and 1.34%, respectively. These results strongly support our belief that meaningful word-pairs can be used as application-independent knowledge to perform Chinese STW conversion effectively on the MWP-related portion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STW Experiment for the MWP Identifier", "sec_num": "3.2" }, { "text": "We selected Microsoft Input Method Editor 2003 for Traditional Chinese (MSIME 2003) as our experimental commercial IME system. In addition, a bigram model called BiGram was developed. The BiGram STW system is a bigram model using Lidstone's law [23] , as well as forward and backward longest syllable-word first strategies. The system dictionary of the BiGram comprises the CKIP lexicon and those unknown words found automatically in the UDN 2001 corpus by a Chinese word auto-confirmation (CWAC) system [31] . 
All the bigram probabilities were calculated from the UDN 2001 corpus.", "cite_spans": [ { "start": 245, "end": 249, "text": "[23]", "ref_id": "BIBREF22" }, { "start": 506, "end": 510, "text": "[31]", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "A Commercial IME System and A Bigram Model with MWP Identifier", "sec_num": "3.3" }, { "text": "MSIME 2003, which uses a statistical trigram-like model [24] , is one of the most widely available input methods. To sum up the results and observations of this experiment, we conclude that the MWP identifier can achieve better MWP-portion STW accuracy than the MSIME 2003 and BiGram STW systems. The results show that the MWP identifier can help both the MSIME 2003 (trigram-like) and BiGram (bigram-based) systems to improve their performance, achieving tonal STW accuracies of 96.30%/96.75% and toneless STW accuracies of 89.79%/87.74%, respectively. Furthermore, the results indicate that the meaningful word-pairs, or contextual information, can be used to effectively overcome the unseen event and over-weighting problems of SLM models in Chinese STW conversion.", "cite_spans": [ { "start": 56, "end": 60, "text": "[24]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Commercial IME System and A Bigram Model with MWP Identifier", "sec_num": "3.3" }, { "text": "We examine the top 300 error cases in the tonal and toneless STW conversion from the open testing results of BiGram with the MWP identifier and classify them according to the following three major types of error (see Table 6 ):", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 218, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis of STW Conversion", "sec_num": "3.4" }, { "text": "(1) Unknown words: For any NLP system, unknown word extraction is one of the most difficult problems [31] . 
Since proper names are major types of unknown words, we classify the cases of unknown words into two sub-types and calculate their corresponding percentages, as shown in Table 6 . (2) Inadequate syllable segmentation: When an error is caused by word overlapping, instead of an unknown word problem, we call it inadequate syllable segmentation. (3) Homophones: These are the remaining errors. Table 6 . Three major error types of tonal/toneless STW conversion.", "cite_spans": [ { "start": 101, "end": 105, "text": "[31]", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 277, "end": 284, "text": "Table 6", "ref_id": null }, { "start": 499, "end": 506, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis of STW Conversion", "sec_num": "3.4" }, { "text": "Sub-Types | Percentage within this type (%) | Examples | Overall Percentage (%)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types", "sec_num": null }, { "text": "From Table 6 , we make the following observations: (1) The percentages of unknown word errors for the tonal and toneless STW systems are similar. Since the unknown word problem is not specific to STW conversion, it can easily be taken care of through manual editing or semi-automatic learning during input. In practice, therefore, the tonal and toneless STW accuracies could be raised to 98% and 91%, respectively. However, even if unknown words of the first error type were incorporated into the system dictionary, the systems could still face the problems of inadequate syllable segmentation or failed homophone disambiguation. (2) The major error types of tonal and toneless STW systems are different. To improve tonal STW systems, the major targets should be cases of failed homophone disambiguation. For toneless STW systems, on the other hand, cases of inadequate syllable segmentation should be the focus for improvement. 
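To make the overlapping-segmentation case concrete, the following sketch (with a hypothetical toy lexicon of syllable-words, not the actual system dictionary) shows a forward longest-syllable-word-first matcher of the kind the BiGram system uses; once it commits to the longest left-most match, an overlapping candidate further right can no longer be recovered:

```python
# Sketch of forward longest syllable-word first segmentation.
# The lexicon entries below are hypothetical toy syllable-words.
def forward_longest_match(syllables, lexicon, max_len=4):
    out, i = [], 0
    while i < len(syllables):
        for n in range(min(max_len, len(syllables) - i), 0, -1):
            word = tuple(syllables[i:i + n])
            if n == 1 or word in lexicon:    # fall back to one syllable
                out.append(word)
                i += n
                break
    return out

lex = {('yi', 'li'), ('li', 'gong'), ('gong', 'ke')}
# The matcher commits to ('yi', 'li') first, so the overlapping
# candidate ('li', 'gong') can never be chosen afterwards.
segments = forward_longest_match(['yi', 'li', 'gong', 'ke'], lex)
assert segments == [('yi', 'li'), ('gong', 'ke')]
```

When the intended words happen to be the ones the greedy pass skips, the result is exactly the inadequate-segmentation error type tallied above; toneless input, having fewer distinct syllables, produces more such overlaps.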
To sum up the above observations, the bottlenecks of the STW conversion lie in the second and third error types. To resolve these issues, we believe one possible approach is to extend the size of MWP data to increase the identified MWP character ratio. This is because our experiment results show that the MWP identifier can achieve better tonal and toneless STW accuracies than those of BiGram and MSIME 2003 on the MWP-related portion (see the examples given in Appendix B).", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Types", "sec_num": null }, { "text": "In this paper, we have applied a MWP identifier to the Chinese STW conversion problem and obtained a high degree of STW accuracy on the MWP-related portion. All of the MWP data was generated fully automatically by using AUTO-MWP on the UDN 2001 corpus. The experiments on STW conversion in [28] and on WSD in [29] , as well as the STW experiments in this study, demonstrate that meaningful word-pairs (i.e. contextual information) are key linguistic features of NLP/NLU systems.", "cite_spans": [ { "start": 290, "end": 294, "text": "[28]", "ref_id": "BIBREF26" }, { "start": 309, "end": 313, "text": "[29]", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Research", "sec_num": "4." }, { "text": "We are encouraged by the fact that MWP knowledge can achieve tonal and toneless STW accuracies of 98.46% and 90.70%, respectively, for the MWP-related portion of the testing syllables. The MWP identifier can be easily integrated into existing STW conversion systems by identifying meaningful word-pairs in a post-processing step. 
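The post-processing step can be pictured with the following sketch (an assumed interface using placeholder ASCII characters; the real systems operate on Chinese character strings): the characters produced by the baseline STW system are overwritten, span by span, with the words of the identified meaningful word-pairs.

```python
# Sketch (assumed interface, not the authors' code): patch a baseline
# STW output with the words of identified meaningful word-pairs. Each
# identified pair word carries the character span it covers.
def apply_mwp(baseline_chars, identified_words):
    chars = list(baseline_chars)
    for start, word in identified_words:
        # Overwrite the baseline conversion on the identified span.
        chars[start:start + len(word)] = list(word)
    return ''.join(chars)

# Toy example: the baseline converts positions 2-3 wrongly ('xx');
# the identified MWP word 'CD' covers that span and corrects it.
result = apply_mwp('ABxxEF', [(2, 'CD')])
assert result == 'ABCDEF'
```

Because only the identified spans are touched, the baseline system's output is preserved everywhere the MWP identifier has nothing to say, which is what makes the identifier usable as a drop-in support module.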
Our experiment shows that, by applying the MWP identifier together with MSIME 2003 (a trigram-like model) and BiGram (an optimized bigram model), the tonal and toneless STW improvements are 25.25%/21.82% and 12.87%/15.62%, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Research", "sec_num": "4." }, { "text": "Currently, our approach is quite basic when more than one MWP occurs in the same sentence (Step 3 in Section 2.2). Although there is room for improvement, we believe it would not produce a noticeable effect as far as the STW accuracy is concerned. However, this issue will become important as we apply the MWP knowledge to parsing or speech understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Research", "sec_num": "4." }, { "text": "The MWP-based approach has the potential to provide the following information for a given syllable sequence: (1) better word segmentation; and (2) MWP-sentence including the information of five types of MWPs. Such information will be useful for general NLP and NLU systems, especially for syllable/speech understanding and full/shallow parsing. According to our computations, the collection of MWP knowledge can cover approximately 50% of the characters in the UDN 2001 corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Research", "sec_num": "4." }, { "text": "We will continue to expand our collection of MWP knowledge to cover more characters in the UDN 2001 corpus. In other directions, we will try to improve our MWP-based STW conversion with other statistical language models, such as HMM, and extend it to other areas of NLP, especially Chinese shallow parsing and syllable/speech understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Research", "sec_num": "4." 
}, { "text": "This project was supported in part by Chinese Multimedia Information Retrieval System ( ) under an excellent Grant AS-91-TP-A09, Research Center for Humanities and Social Sciences, Academia Sinica,and National Science Council under a Center Excellence Grant NSC93-2752-E-001-001-PAE. We would like to thank Prof. Zhen-Dong Dong for providing us with the Hownet dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "5." } ], "back_matter": [ { "text": "Appendix B. Two tonal and toneless STW results used in this study (The pinyin symbols and English words in parentheses are included for explanatory purposes only) I. Tonal STW results for the Chinese tonal syllable input \"yi3 li4 gong1 ke4 guan1 ka3\" of the Chinese sentence \" \"Toneless STW results for the Chinese toneless syllable input \"yi li gong ke guan ka\" of the Chinese sentence \" \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I. Monosyllabic exclusion words", "sec_num": null }, { "text": "Tonal STW results for the Chinese tonal syllable input \"you2 qi2 zai4 cheng2 shou2 qi2 dao4 gu3 bao3 shi2 lv4 bu4 jia1\" of the Chinese sentence \" \"Toneless STW results for the Chinese toneless syllable input \"you qi zai cheng shou qi dao gu bao shi lv4 bu jia\" of the Chinese sentence \" \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Conversion of Phonemic-Input to Chinese Text Through Constraint Satisfaction", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "S", "middle": [ "D" ], "last": "Chern", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Chen", "suffix": "" } ], "year": 1991, "venue": "Proceedings of ICCPOL'91", "volume": "", "issue": "", "pages": "30--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, J.S., S.D. Chern and C.D. Chen. 
Conversion of Phonemic-Input to Chinese Text Through Constraint Satisfaction. Proceedings of ICCPOL'91, pp. 30-36, 1991.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A model for Lexical Analysis and Parsing of Chinese Sentences", "authors": [ { "first": "C", "middle": [ "G" ], "last": "Chen", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "L", "middle": [ "S" ], "last": "Lee", "suffix": "" } ], "year": 1986, "venue": "Proceedings of 1986 International Conference on Chinese Computing", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, C.G., K.J. Chen and L.S. Lee. A model for Lexical Analysis and Parsing of Chinese Sentences. Pro- ceedings of 1986 International Conference on Chinese Computing, Singapore, pp. 33-40, 1986.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Retrieval of broadcast news speech in Mandarin Chinese collected in Taiwan using syllable-level statistical characteristics", "authors": [ { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Wang", "suffix": "" }, { "first": "L", "middle": [ "S" ], "last": "Lee", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 2000 International Conference on Acoustics Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, B., H.M. Wang and L.S. Lee. Retrieval of broadcast news speech in Mandarin Chinese collected in Taiwan using syllable-level statistical characteristics. 
Proceedings of the 2000 International Conference on Acoustics Speech and Signal Processing, 2000.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Conversion of Chinese Phonetic Symbols to Characters", "authors": [ { "first": "K", "middle": [ "H" ], "last": "Chung", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chung, K.H. Conversion of Chinese Phonetic Symbols to Characters. M. Phil. thesis, Department of Com- puter Science, Hong Kong University of Science and Technology, Sept. 1993.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "the content and illustration of Sinica corpus of Academia Sinica", "authors": [], "year": 1995, "venue": "CKIP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CKIP. Technical Report no. 95-02. the content and illustration of Sinica corpus of Academia Sinica. Institute of Information Science, Academia Sinica, http://godel.iis.sinica.edu.tw/CKIP/r_content.html, 1995.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Chinese Character Processing system based on character-root combination and graphics processing", "authors": [ { "first": "C", "middle": [], "last": "Fan", "suffix": "" }, { "first": "P", "middle": [], "last": "Zini", "suffix": "" } ], "year": 1988, "venue": "Proc. of the Int. Conf. on Electronic Publishing, Doc. Manipulation and Typography", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fan, C. and P. Zini. Chinese Character Processing system based on character-root combination and graphics processing. Document Manipulation and Typography, Proc. of the Int. Conf. on Electronic Publishing, Doc. 
Manipulation and Typography, Nice (France), Cambridge University Press, 1988.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Word Segmentation for Chinese Phonetic Symbols", "authors": [ { "first": "L", "middle": [ "A" ], "last": "Fong", "suffix": "" }, { "first": "K", "middle": [ "H" ], "last": "Chung", "suffix": "" } ], "year": 1994, "venue": "Proceedings of International Computer Symposium", "volume": "", "issue": "", "pages": "911--916", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fong, L.A. and K.H. Chung. Word Segmentation for Chinese Phonetic Symbols. Proceedings of International Computer Symposium, pp. 911-916, 1994.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Survey on Chinese Speech Recognition", "authors": [ { "first": "S", "middle": [ "W K" ], "last": "Fu", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Lee", "suffix": "" }, { "first": "Orville", "middle": [ "L C" ], "last": "", "suffix": "" } ], "year": 1996, "venue": "Communications of COLIPS", "volume": "6", "issue": "1", "pages": "1--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fu, S.W.K, C.H. Lee and Orville L.C. A Survey on Chinese Speech Recognition. Communications of COLIPS 6 (1), pp.1-17, 1996.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Toward a Unified Approach to Statistical Language Modeling for Chinese", "authors": [ { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "J", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "M", "middle": [], "last": "Li", "suffix": "" }, { "first": "K", "middle": [ "F" ], "last": "Lee", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "1", "issue": "1", "pages": "3--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao, J, J. Goodman, M. Li and K.F. Lee. Toward a Unified Approach to Statistical Language Modeling for Chinese. 
ACM Transactions on Asian Language Information Processing 1(1), pp. 3-33, 2002.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Markov modeling of mandarin Chinese for decoding the phonetic sequence into Chinese characters", "authors": [ { "first": "H", "middle": [ "Y" ], "last": "Gu", "suffix": "" }, { "first": "C", "middle": [ "Y" ], "last": "Tseng", "suffix": "" }, { "first": "L", "middle": [ "S" ], "last": "Lee", "suffix": "" } ], "year": 1991, "venue": "Computer Speech and Language", "volume": "5", "issue": "4", "pages": "363--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gu, H.Y., C.Y. Tseng and L.S. Lee. Markov modeling of mandarin Chinese for decoding the phonetic se- quence into Chinese characters. Computer Speech and Language 5(4), pp.363-377, 1991.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Integrating long-distance language modeling to phonetic-to-text conversion", "authors": [ { "first": "T", "middle": [ "H" ], "last": "Ho", "suffix": "" }, { "first": "K", "middle": [ "C" ], "last": "Yang", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Lin", "suffix": "" }, { "first": "L", "middle": [ "S" ], "last": "Lee", "suffix": "" } ], "year": 1997, "venue": "Proceedings of ROCLING X International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "287--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ho, T.H., K.C. Yang, J.S. Lin and L.S. Lee. Integrating long-distance language modeling to phonetic-to-text conversion. Proceedings of ROCLING X International Conference on Computational Linguistics, pp. 
287-299, 1997.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Semantic Analysis in GOING -An Intelligent Chinese Input System", "authors": [ { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the Second Joint Conference of Computational Linguistics, Shiamen", "volume": "", "issue": "", "pages": "338--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsu, W.L. and K.J. Chen. The Semantic Analysis in GOING -An Intelligent Chinese Input System. Proceed- ings of the Second Joint Conference of Computational Linguistics, Shiamen, pp. 338-343, 1993.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Chinese parsing in a phoneme-to-character conversion system based on semantic pattern matching", "authors": [ { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 1994, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "8", "issue": "2", "pages": "227--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsu, W.L. Chinese parsing in a phoneme-to-character conversion system based on semantic pattern matching. Computer Processing of Chinese and Oriental Languages 8(2), pp. 227-236, 1994.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "On Phoneme-to-Character Conversion Systems in Chinese Processing", "authors": [ { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" }, { "first": "Y", "middle": [ "S" ], "last": "Chen", "suffix": "" } ], "year": 1999, "venue": "Journal of Chinese Institute of Engineers", "volume": "5", "issue": "", "pages": "573--579", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsu, W.L. and Y.S. Chen. On Phoneme-to-Character Conversion Systems in Chinese Processing. Journal of Chinese Institute of Engineers, 5, pp. 
573-579, 1999.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Input and Output of Chinese and Japanese Characters", "authors": [ { "first": "J", "middle": [ "K" ], "last": "Huang", "suffix": "" } ], "year": 1985, "venue": "IEEE Computer", "volume": "18", "issue": "1", "pages": "18--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, J.K. The Input and Output of Chinese and Japanese Characters. IEEE Computer 18(1), pp. 18-24, 1985.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Phonetic-input-to-character conversion system for Chinese using syntactic connection table and semantic distance", "authors": [ { "first": "J", "middle": [ "J" ], "last": "Kuo", "suffix": "" } ], "year": 1995, "venue": "Computer Processing and Oriental Languages", "volume": "10", "issue": "2", "pages": "195--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuo, J.J. Phonetic-input-to-character conversion system for Chinese using syntactic connection table and se- mantic distance. Computer Processing and Oriental Languages 10(2), pp. 195-210, 1995.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A perturbation technique for handling handwriting variations faced in stroke-based Chinese character classification", "authors": [ { "first": "C", "middle": [ "W" ], "last": "Lee", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "R", "middle": [ "H" ], "last": "Cheng", "suffix": "" } ], "year": 1997, "venue": "Computer Processing of Oriental Languages", "volume": "10", "issue": "3", "pages": "259--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, C.W., Z. Chen and R.H. Cheng. A perturbation technique for handling handwriting variations faced in stroke-based Chinese character classification. Computer Processing of Oriental Languages 10(3), pp. 
259-280, 1997.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Task adaptation in Stochastic Language Model for Chinese Homophone Disambiguation", "authors": [ { "first": "Y", "middle": [ "S" ], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "2", "issue": "1", "pages": "49--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, Y.S. Task adaptation in Stochastic Language Model for Chinese Homophone Disambiguation. ACM Transactions on Asian Language Information Processing 2(1), pp. 49-62, 2003.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Removing the ambiguity of phonetic Chinese input by the relaxation technique", "authors": [ { "first": "M", "middle": [ "Y" ], "last": "Lin", "suffix": "" }, { "first": "W", "middle": [ "H" ], "last": "Tasi", "suffix": "" } ], "year": 1987, "venue": "Computer Processing and Oriental Languages", "volume": "3", "issue": "1", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, M.Y. and W.H. Tasi. Removing the ambiguity of phonetic Chinese input by the relaxation technique. Computer Processing and Oriental Languages 3(1), pp. 1-24, 1987.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A Touch-Typing Pinyin Input System. Computer Processing of Chinese and Oriental Languages", "authors": [ { "first": "K", "middle": [ "T" ], "last": "Lua", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Gan", "suffix": "" } ], "year": 1992, "venue": "", "volume": "6", "issue": "", "pages": "85--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lua, K.T. and K.W. Gan. A Touch-Typing Pinyin Input System. Computer Processing of Chinese and Orien- tal Languages, 6, pp. 
85-94, 1992.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Schuetze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "191--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C. D. and Schuetze, H. Foundations of Statistical Natural Language Processing, MIT Press, pp. 191-220, 1999.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Six-Digit Coding Method", "authors": [ { "first": "J", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "S", "middle": [], "last": "Qiao", "suffix": "" } ], "year": 1984, "venue": "Commun. ACM", "volume": "33", "issue": "5", "pages": "248--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiao, J., Y. Qiao and S. Qiao. Six-Digit Coding Method. Commun. ACM 33(5), pp. 248-267, 1984.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "An Application of Statistical Optimization with Dynamic Programming to Phonemic-Input-to-Character Conversion for Chinese", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 1990, "venue": "Proceedings of ROCLING III", "volume": "", "issue": "", "pages": "379--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R. An Application of Statistical Optimization with Dynamic Programming to Phonemic-Input-to-Character Conversion for Chinese. Proceedings of ROCLING III, pp. 
379-390, 1990.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Applying an NVEF Word-Pair Identifier to the Chinese Syllable-to-Word Conversion Problem", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Tsai", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 19 th COLING", "volume": "", "issue": "", "pages": "1016--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, J.L. and W.L. Hsu. Applying an NVEF Word-Pair Identifier to the Chinese Syllable-to-Word Conversion Problem. Proceedings of 19 th COLING 2002, Taipei, pp.1016-1022, 2002.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Word Sense Disambiguation and Sense-based NV Event-Frame Identifier", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Tsai", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Su", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "7", "issue": "", "pages": "29--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, J.L, W.L. Hsu and J.W. Su. Word Sense Disambiguation and Sense-based NV Event-Frame Identifier. Computational Linguistics and Chinese Language Processing 7(1), pp.29-46, 2002.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Auto-Discovery of NVEF word-pairs in Chinese", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Tsai", "suffix": "" }, { "first": "G", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ROCOLING XV", "volume": "", "issue": "", "pages": "143--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, J.L, G. Hsieh and W.L. Hsu. Auto-Discovery of NVEF word-pairs in Chinese. 
Proceedings of ROCOLING XV, pp.143-160, 2003.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Chinese Word Auto-Confirmation Agent", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Tsai", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Sung", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ROCOLING XV", "volume": "", "issue": "", "pages": "175--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, J.L, C.L. Sung and W.L. Hsu. Chinese Word Auto-Confirmation Agent. Proceedings of ROCOLING XV, pp.175-192, 2003.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Auto-Generation of NVEF knowledge in Chinese", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Tsai", "suffix": "" }, { "first": "G", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "W", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "9", "issue": "", "pages": "41--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, J.L, G. Hsieh and W.L. Hsu. Auto-Generation of NVEF knowledge in Chinese. Computational Linguis- tics and Chinese Language Processing 9(1), pp.41-64, 2004.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": ", NN, VV, AN and DV word-pairs. As shown in the figure, if the MWP identifier only uses one of the meaningful NV, NN, VV, AN or DV word-pair datasets, it will naturally become an MNV, MNN, MVV, MAN or MDV word-pair identifier. 
A system overview of the meaningful word-pair (MWP) identifier.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "[a] [civilization][of] [decay] [process]).\" When we used MSIME 2003 to convert the same syllables, the output was \" (one) (famous) (of) (decay) (process).\" Obviously, the over-weighted bigram \" -(wen2 ming2-de5)\" causes an STW error in MISIME 2003, which uses a statistical language model (SLM) with a trigram-like Chinese input product [24]. If we use the MWP-sentence shown in Step 4 to directly replace the corresponding characters of the MSIME 2003 output in this example, the error converted word \" (famous)\", caused by the over-weighting of MSIME 2003, becomes the correct word \" (civilization).\"", "uris": null }, "TABREF0": { "text": "The number of articles per article class in the training corpus.", "content": "
article class
China | Local | Society | Stock | Politics | Science | Travel
# of articles | 902 | 6,843 | 136 | 19,699 | 133 | 5,870 | 6,183
article class
Consumption | Financial | World | Sport | Entertainment | Health | Arts
# of articles | 124 | 982 | 3,563 | 7,404 | 12,404 | 18,674 | 5,653 | 9,989
", "type_str": "table", "num": null, "html": null }, "TABREF1": { "text": "The number of generated NV, NN, VV, AN and DV word-pairs obtained by applying AUTO-MWP to the UDN 2001 corpus.", "content": "
NV | NN | VV | AN | DV | Total
430,698 | 533,780 | 220,022 | 138,055 | 111,879 | 1,434,434
", "type_str": "table", "num": null, "html": null }, "TABREF2": { "text": "Fifteen randomly selected examples of meaningful NV, NN, VV, AN and DV word-pairs and their corresponding frequencies from the generated MWP datasets for the UDN 2001 corpus.", "content": "
NV | NN | VV | AN | DV
-/118 | -/83 | -/541 | -/206 | -/188
-/35 | -/103 | -/1483 | -/103 | -/390
-/96 | -/107 | -/124 | -/129 | -/144
", "type_str": "table", "num": null, "html": null }, "TABREF3": { "text": "An illustration of an MWP-sentence for the Chinese syllables \"", "content": "
yi1 ge5 wen2 ming2 de5 shuai1 wei2 guo4
", "type_str": "table", "num": null, "html": null }, "TABREF4": { "text": ". All test sentences are composed of a string of Chinese characters. In following experiments, the training/testing corpus, closed/open test sets and the collection of MWPs were: Training corpus: We used the UDN 2001 corpus mentioned in Section 2 as our training corpus. All knowledge of word frequencies, meaningful word-pairs, MWP frequencies was auto-generated and computed by this corpus. 10,000 sentences were randomly selected from the UDN 2001 corpus as the closed test set. The {minimum, maximum, and mean} of characters per sentence for the closed test set were {4, 37, and 12}. Open test set: 10,000 sentences were randomly selected from the UDN 2002 corpus as the open test set. At this point, we checked that the selected open test sentences were not in the closed test set as well. The {minimum, maximum, and mean} of characters per sentence for the open test set were {4, 43, and 13.7}. By applying our AUTO-MWP on the UDN 2001 corpus, we created 430,698 NV, 533,780 NN, 220,022 VV, 138,055 AN and 111,879 DV word-pairs as the MWP testing data.", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF5": { "text": "The results of the tonal STW experiment for the MWP identifier with NV, NN, VV, AN, DV and The results of the toneless STW experiment for the MWP identifier with NV, NN, VV, AN, DV and (NV+NN+VV+AN+DV) word-pairs.", "content": "
(NV+NN+VV+AN+DV) word-pairs.
 | Closed | Open | Average (identified character ratio)
NV | 99.08% | 98.70% | 98.90% (21.69%)
NN | 98.54% | 98.30% | 98.43% (34.56%)
VV | 98.25% | 97.25% | 97.81% (14.64%)
AN | 97.41% | 96.83% | 97.14% (10.07%)
DV | 98.07% | 97.45% | 97.80% (9.46%)
(NV+NN+VV+AN+DV) | 98.69% | 98.20% | 98.46% (46.67%)
 | Closed | Open | Average (identified character ratio)
NV | 91.53% | 90.03% | 91.01% (24.46%)
NN | 91.41% | 89.82% | 90.92% (27.79%)
VV | 88.80% | 86.96% | 87.67% (12.20%)
AN | 88.00% | 86.04% | 86.89% (10.67%)
DV | 88.98% | 86.51% | 88.03% (10.03%)
(NV+NN+VV+AN+DV) | 91.33% | 89.99% | 90.70% (38.63%)
", "type_str": "table", "num": null, "html": null }, "TABREF6": { "text": "compares the results of MSIME 2003, and MSIME 2003 with the MWP identifier on the closed and open test sentences.Table 5b compares the results of BiGram, and BiGram with the MWP identifier on the closed and open test sentences. In this experiment, the STW output of MSIME with the MWP identifier, or BiGram with the MWP identifier, was collected by directly replacing the identified meaningful word-pairs from the corresponding STW output of MSIME or BiGram. From Table 5a, the tonal STW improvements of MSIME and BiGram by using the MWP identifier are 25.25% and 12.87%, respectively. Meanwhile, from Table 5b, the toneless STW improvements of MSIME and BiGram by using the MWP identifier are 21.82% and 15.62%, respectively. The results of tonal STW experiments for closed and open test sentences, using MSIME, BiGram, MSIME with MWP identifier and BiGram with MWP identifier. STW accuracies of the words identified by the Microsoft Input Method Editor (MSIME) 2003 and the BiGram b STW improvements of the words identified by the MSIME 2003 with the MWP identifier and the BiGram with the MWP identifier", "content": "
Identified-word | MSIME a | BiGram a | MSIME + MWP b | BiGram + MWP b
MWP portion | 96.87% | 97.29% | - | -
Overall | 95.05% | 96.27% | 25.25% | 12.87%
", "type_str": "table", "num": null, "html": null }, "TABREF7": { "text": "The results of toneless STW experiments for closed and open test sentences, using MSIME, BiGram, MSIME with MWP identifier and BiGram with MWP identifier.", "content": "
Identified-word | MSIME a | BiGram a | MSIME + MWP b | BiGram + MWP b
MWP portion | 89.40% | 89.23% | - | -
Overall | 86.94% | 85.47% | 21.82% | 15.62%
", "type_str": "table", "num": null, "html": null } } } }