{ "paper_id": "O03-5003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:01:30.470479Z" }, "title": "Building A Chinese WordNet Via Class-Based Translation Model", "authors": [ { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University 101", "location": { "addrLine": "Sec. 2, Kuang Fu Road", "settlement": "Hsinchu", "country": "Taiwan, ROC" } }, "email": "jschang@cs.nthu.edu.tw" }, { "first": "Tracy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao Tung University", "location": { "addrLine": "1001, University Road", "settlement": "Hsinchu", "country": "Taiwan, ROC" } }, "email": "tracylin@mail.nctu.edu.tw" }, { "first": "Geeng-Neng", "middle": [], "last": "You", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taichung Institute of Technology", "location": { "addrLine": "San Ming Road", "settlement": "Taichung", "country": "Taiwan, ROC" } }, "email": "" }, { "first": "Thomas", "middle": [ "C" ], "last": "Chuang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao Tung University", "location": { "addrLine": "1001, University Road", "settlement": "Hsinchu", "country": "Taiwan, ROC" } }, "email": "tomchuang@cc.vit.edu.tw" }, { "first": "Ching-Ting", "middle": [], "last": "Hsieh", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao Tung University", "location": { "addrLine": "1001, University Road", "settlement": "Hsinchu", "country": "Taiwan, ROC" } }, "email": "chingting@ptl.com.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Semantic lexicons are indispensable to research in lexical semantics and word sense disambiguation (WSD). For the study of WSD for English text, researchers have been using different kinds of lexicographic resources, including machine readable dictionaries (MRDs), machine readable thesauri, and bilingual corpora. In recent years, WordNet has become the most widely used resource for the study of WSD and lexical semantics in general. This paper describes the Class-Based Translation Model and its application in assigning translations to nominal senses in WordNet in order to build a prototype Chinese WordNet. Experiments and evaluations show that the proposed approach can potentially be adopted to speed up the construction of WordNet for Chinese and other languages.", "pdf_parse": { "paper_id": "O03-5003", "_pdf_hash": "", "abstract": [ { "text": "Semantic lexicons are indispensable to research in lexical semantics and word sense disambiguation (WSD). For the study of WSD for English text, researchers have been using different kinds of lexicographic resources, including machine readable dictionaries (MRDs), machine readable thesauri, and bilingual corpora. In recent years, WordNet has become the most widely used resource for the study of WSD and lexical semantics in general. This paper describes the Class-Based Translation Model and its application in assigning translations to nominal senses in WordNet in order to build a prototype Chinese WordNet. 
Experiments and evaluations show that the proposed approach can potentially be adopted to speed up the construction of WordNet for Chinese and other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "WordNet has received widespread interest since its introduction in 1990 [Miller 1990 ]. As a large-scale semantic lexical database, WordNet covers a large vocabulary, similar to that of a typical college dictionary, but its information is organized differently. The synonymous word senses are grouped into so-called synsets. Noun senses are further organized into a deep IS-A hierarchy. The database also contains many semantic relations, including hypernyms, hyponyms, holonyms, meronyms, etc. WordNet has been applied in a wide range of studies on such topics as word sense disambiguation [Towell and Voorhees, 1998; Mihalcea and Moldovan, 1999] , information retrieval [Pasca and Harabagiu, 2001] , and computer-assisted language learning [Wible and Liu, 2001] .", "cite_spans": [ { "start": 72, "end": 84, "text": "[Miller 1990", "ref_id": "BIBREF4" }, { "start": 583, "end": 610, "text": "[Towell and Voorhees, 1998;", "ref_id": "BIBREF7" }, { "start": 611, "end": 639, "text": "Mihalcea and Moldovan, 1999]", "ref_id": "BIBREF3" }, { "start": 664, "end": 691, "text": "[Pasca and Harabagiu, 2001]", "ref_id": "BIBREF5" }, { "start": 734, "end": 755, "text": "[Wible and Liu, 2001]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Thus, there is a universally shared interest in the construction of WordNet in different languages. However, constructing a WordNet for a new language is a formidable task. To exploit the resources of WordNet for other languages, researchers have begun to study ways of speeding up the construction of WordNet for many European languages [Vossen, Diez-Orzas, and Peters, 1997] . One of many ways to build a WordNet for a language other than English is to associate WordNet senses with appropriate translations. Many researchers have proposed using existing monolingual and bilingual Machine Readable Dictionaries (MRDs), with an emphasis on nouns [Daude, Padro & Rigau, 1999] . Very little study has been done on using corpora or on covering other parts of speech, including adjectives, verbs, and adverbs. In this paper, we describe a new method for automating the process of constructing a Chinese WordNet.", "cite_spans": [ { "start": 338, "end": 376, "text": "[Vossen, Diez-Orzas, and Peters, 1997]", "ref_id": "BIBREF8" }, { "start": 645, "end": 673, "text": "[Daude, Padro & Rigau, 1999]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The method was developed specifically for nouns and is capable of assigning Chinese translations to some 20,000 nominal synsets in WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of this paper is divided into four sections. The next section provides the background on using a bilingual dictionary to build a Chinese WordNet and semantic concordance. Section 3 describes a class-based translation model for assigning translations to WordNet senses. Section 4 describes the experimental setup and results. A conclusion is provided in Section 5 along with directions for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
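}, { "text": "The structures just described are easy to inspect programmatically. The sketch below is our illustration rather than part of the original study; it assumes NLTK's WordNet interface with the WordNet data downloaded.

# Our illustration (not from the paper): listing synsets and a hypernym chain.
# Assumes NLTK is installed and the WordNet corpus has been downloaded,
# e.g. via nltk.download('wordnet').
from nltk.corpus import wordnet as wn

# The nominal synsets of 'plant' (cf. the four senses discussed in Section 2).
for s in wn.synsets('plant', pos=wn.NOUN)[:4]:
    print(s.name(), '-', s.definition())

# A hypernym path of plant.n.01 ('buildings for carrying on industrial labor'),
# printed from the most general concept down to the sense itself.
print(' -> '.join(s.name() for s in wn.synset('plant.n.01').hypernym_paths()[0]))

Such hypernym chains and hyponym sets are exactly the structures exploited in Section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."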
}, { "text": "In this section, we describe the proposed method for automating the construction process of a Chinese WordNet. We have experimented to find the simplest way of attaching an appropriate translation to each WordNet sense under a Class-Based Translation Model. The translation candidates are taken from a bilingual word list or Machine Readable Dictionaries (MRDs). We will use an example to show the idea, and a formal description will follow in Section 3. Let us consider the example of assigning appropriate translations for the nominal senses of \"plant\" in WordNet 1.7.1. The noun \"plant\" in WordNet has four senses:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "1. plant, works, industrial plant (buildings for carrying on industrial labor);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "2. plant, flora, plant life (a living organism lacking the power of locomotion);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "3. plant (something planted secretly for discovery by another person); 4. plant (an actor situated in the audience whose acting is rehearsed but seems spontaneous to the audience).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "The following translations are listed for the noun \"plant\" in the Longman Dictionary of Contemporary English (English-Chinese Edition) [Longman Group 1992]:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "1. , 2. , 3. , 4. , 5. , and 6. .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "For words such as \"plant\" with multiple senses and translations, the question arises:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "Which translation goes with which synset? We make the following observations that are crucial to the solution of the problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "1. Each nominal synset has a chain of hypernyms which give ever more general concepts of the word sense. For instance, plant-1 is a building complex, which in turn is a structure and so on and so forth, while plant-2 can be generalized as a life form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "2. The hyponyms of a certain top concept in WordNet form a set of semantically related word senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "3. 
Semantically related senses tend to have surface realizations in Chinese with shared characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "For instance, building complex spawns the hyponyms factory, mill, assembly plant, cannery, foundry, maquiladora, etc., all of which are realized in Chinese using the characters \" \" or \" .\" Therefore, we can say that there is a high probability that senses which are direct or indirect hyponyms of building complex share the Chinese characters \" \" and \" \" in their Chinese translations. It is thus clear that one can determine that plant-1, a hyponym of building complex, should have \" \" instead of \" \" as its translation. See Table 1 for more examples. This intuition can be expanded into a systematic way of assigning the most appropriate translation to a given word sense. Figure 1 shows how the method works for the four senses of plant.", "cite_spans": [], "ref_spans": [ { "start": 527, "end": 534, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 676, "end": 684, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "In the following, we will consider the task of assigning the most appropriate translation to plant-1, the first sense of the noun \"plant.\" First, the system looks up \"plant\" in the Translation Table (T Table) for candidate translations of plant-1:", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 208, "text": "Table (T Table)", "ref_id": null } ], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "(plant, ), (plant, ), (plant, ), (plant, ), (plant, ), (plant, ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "Next, the semantic class g to which plant-1 belongs is determined by consulting the Semantic Class Table (SC Table). In this study we use some 1,145 semantic classes (see Section 4); for plant-1, the class is", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 115, "text": "Table (SC Table)", "ref_id": null } ], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "g = N001004003030.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "Subsequently, the system evaluates the probabilities of each translation conditioned on the semantic class g:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "P(\" \" | N001004003030), P(\" \" | N001004003030), P(\" \" | N001004003030), P(\" \" | N001004003030), P(\" \" | N001004003030), P(\" \" | N001004003030).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "These probabilities are not evaluated directly. The system takes apart the characters in a translation and looks up P( u | g ), the probability of each translation character u conditioned on g. Note that to deal with lookup failure, a smoothing probability is used (0.000025, derived using the Good-Turing method). 
By using a statistical estimate based on simple linear interpolation, we can get", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "P(\" \" | N001004003030) = 0.0178 and P(\" \" | N001004003030) = 0.0073.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "P(\" \" | plant-1) \u2248 P(\" \" | N001004003030) \u2248 1/2 P(\" \" | N001004003030) + 1/2 P(\" \" | N001004003030) = 1/2 (0.0178+0.0073) = 0.0124.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "Similarly, we have P(\" \" | N001004003030) = 0.0013, P(\" \" | N001004003030) = 0.0023, P(\" \" | N001004003030) = 0.0028, P(\" \" | N001004003030) = 0.0014, P(\" \" | N001004003030) = 0.0001.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "Finally, by choosing the translation with the highest probabilistic value for g, we can get an entry for the Chinese WordNet (CWN Table):", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 132, "text": "Table)", "ref_id": null } ], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "(plant, , n, 1, \"buildings for carrying on industrial labor\")", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "After we get the correct translation of plant-1 and many other word senses in g, we will be able to re-estimate the class-based translation probability for g and produce a new CT Table. However, the reader may wonder how we can get the initial CT Table. This dilemma can be resolved by adopting an iterative algorithm that establishes an initial CT Table and makes revisions until the values in the CT Table converge. More details will be provided in Section 3.", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 185, "text": "Table.", "ref_id": null }, { "start": 247, "end": 253, "text": "Table.", "ref_id": null }, { "start": 349, "end": 364, "text": "Table and makes", "ref_id": null }, { "start": 401, "end": 415, "text": "Table converge", "ref_id": null } ], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }, { "text": "Translation Model and how the model can be trained iteratively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fig. 1 Using CBTM to build Chinese WordNet. This example shows how the first sense of plant receives an appropriate translation via the Class-Based", "sec_num": null }
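, { "text": "Before the formal treatment, the scoring step of this walkthrough can be summarized in a few lines. The sketch below is ours, not the paper's: the probabilities 0.0178 and 0.0073 and the smoothing constant come from the text above, while the candidate strings and dictionary entries are illustrative stand-ins, since the original glyphs did not survive extraction.

# A minimal sketch (ours) of the walkthrough's scoring step; cf. Equation 5a.
SMOOTH = 0.000025  # Good-Turing smoothing probability quoted above

def score(translation, unigram_prob):
    # Average the class-conditional probabilities of the translation's characters.
    return sum(unigram_prob.get(u, SMOOTH) for u in translation) / len(translation)

probs_g = {'工': 0.0178, '廠': 0.0073}  # P(u | N001004003030); stand-in glyphs
candidates = ['工廠', '植物']  # stand-ins for the six candidate translations

best = max(candidates, key=lambda c: score(c, probs_g))
print(best, score(best, probs_g))  # the highest-probability translation wins

Choosing the argmax over the candidates is precisely how the CWN entry above is produced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Bilingual MRD and Corpus to Bilingual Semantic Database", "sec_num": "2." }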
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ") | ( max arg ) ( k ) ( k * E C P E C E T C \u2208 \u2245 ,", "eq_num": "(1)" } ], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "where T(X) is the set of Chinese translations of sense X listed in a bilingual dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "Based on our observation that semantically related senses tend to be realized in Chinese using shared Chinese characters, we tie together the probability functions of translation words in the same semantic class and use the class-based probability as an approximation. Thus, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ") | ( ) | ( k g C P E C P \u2245 ,", "eq_num": "(2)" } ], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "where g = g(E k ) is the semantic class containing E k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "The probability of P(C|g) can be estimated using the Expectation and Maximization Algorithm as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "(Initialization) m E C P 1 ) | ( k = , m = | T(E) | and C \u2208 T(E);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(Maximization) \u2211 \u2211 \u2208 \u2208 = = i k E i k E g E I E C P g E I C C I E C P g C P , , k k i , , k i k i ) ( ) | ( ) ( ) ( ) | ( ) | ( ,", "eq_num": "(4)" } ], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "C i = the ith translation of E k in T(E k ) , I(x) = 1 if", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "x is true and 0 otherwise;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(Expectation) ) | ( ) | ( k 1 g C P E C P = ,", "eq_num": "(5)" } ], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "where g = g (E k ) is the class that contains E k ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "(Normalization)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2211 \u2208 = ) ( k 1 k 1 k k ) | ( ) | ( ) | ( E T D E D P E C P E C P .", "eq_num": "(6)" } ], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "In order to avoid the problem of data sparseness, P(C|g) is estimated indirectly via the unigrams and bigrams in C. We also weigh the contribution of each unigram and bigram to avoid the domination of a particular character in the semantic class. Therefore, we rewrite Equations 4 and 5 as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(Maximization) \u2211 \u2211 \u2208 = \u2208 = j i k E j i k E u E u P g E I m E u P u u I g E I m g u P , , , k j i, k , , , k j i, j i, k ) | ( ) ( 1 ) | ( ) ( ) ( 1 ) | ( ,", "eq_num": "(4a)" } ], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "where u i,j = the jth unigram of the ith translation in T(E k ) , m = the number of characters in the ith", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "translation in T(E k ), \u2211 \u2211 \u2208 \u2212 = \u2208 \u2212 = j i k E j i k E b E b P g E I m E b P b b I g E I m g b P , , , k j i, k , , , k j i, j i, k ) | ( ) ( 1 1 ) | ( ) ( ) ( 1 1 ) | ( ,", "eq_num": "(4b)" } ], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "where b i,j = the jth overlapping bigram of the ith translation in T(E k );", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "(Expectation)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2211 = \u2245 \u2245 m i u m g u P g C P E C P 1 i k 1 ) | ( ) | ( ) | ( (unigram),", "eq_num": "(5a)" } ], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "\u2211 \u2211 \u2212 = = \u2212 + \u2245 \u2245 1 1 i 1 i k 1 ) 1 ( 2 ) | ( 2 ) | ( ) | ( ) | ( m i b m i u m g b P m g u P g C P E C P (+bigram), (5b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "where u i is a unigram, b i is an on overlapping bigram of C, and m is the number of characters in C .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "For instance, assume that we have the first sense trunk-1 of the word trunk in WordNet and the translations in LDOCE as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "trunk-1 (the main stem of a tree; usually covered with bark; the bole is usually the part that is commercially useful for lumber),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." 
}, { "text": "Translations of trunk -Initially, the probabilities of each translation for trunk-1 are as follows: Table 3 shows the words in the semantic class N001004001018013014 (stalk, stem), containing trunk-1 and relevant translations. Following Equations 4a and 4b, we took the unigrams and overlapping bigrams from these translations to calculate the probability of unigram and bigram translations for (stalk, stem). Although initially irrelevant translations such as bulb-(light bulb) can not be excluded, after one iteration of the maximization step, the noise is suppressed substantially, and the top ranking translations shown in Tables 4 and 5 seem to be the \"genus\" terms of the class. For instance, the top ranking unigrams for N001004001018013014 include (stem), gives a higher probabilistic value for the correct translation \" \" than the unigram-based approach does (0.76268783669 vs. 0.665950591). In this case, linear interpolation is a better parameter estimation scheme. Our experiments showed, in general, that combining both unigrams and bigrams does lead to better overall performance.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 627, "end": 635, "text": "Tables 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "P( | trunk-1 ) = 1/4, P( | trunk-1 ) = 1/4, P( | trunk-1 ) = 1/4, P( | trunk-1 ) = 1/4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class-Based Translation Model", "sec_num": "3." }, { "text": "We carried out two experiments to see how well CBTM can be applied to assign appropriate translations to nominal senses in WordNet. In the first experiment, the translation probability was estimated using Chinese character unigrams, while in the second experiment, both unigrams and bigrams were used. The linguistic resources used in the experiments included:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4." }, { "text": "1. WordNet 1.6: WordNet contains approximately 116,317 nominal word senses organized into approximately 57,559 word meanings (synsets).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4." }, { "text": "Longman English-Chinese Dictionary of Contemporary English (LDOCE E-C): LDOCE is a learner's dictionary with 55,000 entries. Each word sense contains information, such as a definition, the part-of-speech, examples, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "In our method, we take advantage of its wide coverage of frequently used senses and corresponding Chinese translations. In the experiments, we tried to restrict the translations to lexicalized words rather than descriptive phrases. We set a limit on the length of a translation: nine Chinese characters or less. Many of the nominal entries in WordNet are not covered by learner dictionaries; therefore, the experiments focused on those senses for which Chinese translations are available in LDOCE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "taxonomy, which brings together words with related meanings and lists them in topical/semantic classes with definitions, examples, and illustrations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Longman Lexicon of Contemporary English (LLOCE): LLOCE is a bilingual", "sec_num": "3." 
}, { "text": "The three tables shown in Figure 1 were generated in the course of the experiments:", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 34, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Longman Lexicon of Contemporary English (LLOCE): LLOCE is a bilingual", "sec_num": "3." }, { "text": "1. The Translation Table has 44,726 entries and was easily constructed by extracting Chinese translations from LDOCE E-C [Proctor 1988 ].", "cite_spans": [ { "start": 121, "end": 134, "text": "[Proctor 1988", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 19, "end": 28, "text": "Table has", "ref_id": null } ], "eq_spans": [], "section": "Longman Lexicon of Contemporary English (LLOCE): LLOCE is a bilingual", "sec_num": "3." }, { "text": "We obtained the Sense Class Table by finding the common hypernyms of sets of words in LLOCE. 1,145 classes were used in the experiments. Table was constructed using the EM algorithm based on the T Table and SC Table. The CT Table contains 155,512 entries. Table 6 shows the results of using CBTM and Equation 1 to find the best translations for a word sense. We are concerned with the coverage of word senses in average text. In that sense, the translation of plant-3 is incorrect, but this error is not very significant, since this word sense is used infrequently. We chose the WordNet semantic concordance, SEMCOR, as our testing corpus. There are 13,494 distinct nominal word senses in SEMCOR. After the translation probability calculation step, our results covered 10,314 word senses in SEMCOR;", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 36, "text": "Table by", "ref_id": null }, { "start": 137, "end": 146, "text": "Table was", "ref_id": null }, { "start": 197, "end": 216, "text": "Table and SC Table.", "ref_id": null }, { "start": 224, "end": 238, "text": "Table contains", "ref_id": null }, { "start": 256, "end": 263, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "thus, the coverage rate was 76.43%. To see how well the model assigns translations to WordNet senses appearing in average text, we randomly selected 500 noun instances from SEMCOR as our test data. There were two senses in WordNet, while 70 words had three senses in WordNet, and so on. The average degree of sense ambiguity was 4.2. Among our 500 test data, 280 entries were the first sense, while 112 entries were the second sense. Over half of the words had the meaning of the first sense. Therefore, the first sense was most frequently used. Therefore, it was found to be more important to get the first and the second senses right. We manually gave each word sense an appropriate Chinese translation whenever one was available from LDOCE. From these translations, we found the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class Translation", "sec_num": "3." }, { "text": "1. There were 491 word senses for which corresponding translations were available from LDOCE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class Translation", "sec_num": "3." }, { "text": "2. There were 5 word senses for which no relevant translations could be found in LDOCE due to the limited coverage of this learner's dictionary. Those word senses and relevant translations included assignment-2 ( ), marriage-3 ( ), snowball-1( ), prime-1( ), and program-7 ( ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class Translation", "sec_num": "3." }, { "text": "3. 
There were 4 words that had no translations due to the particular cross-referencing scheme of LDOCE. Under this scheme, some nouns in LDOCE are not directly given a definition and translation, but rather a pointer to a more frequently used spelling. For instance, \"groom\" is given a pointer to \"BRIDEGROOM\" rather than the relevant definition and translation (\" \").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Class Translation", "sec_num": "3." }, { "text": "In the first experiment, we started out by ranking the relevant translations for each noun sense using the class-based translation model. If two translations had the same probabilistic value, we gave them the same rank. For instance, Table 8 shows that the Top 1 translation for plant-1 was \" .\" Table 8 . The rank of each translation corresponding to each word sense. (plant-2, ) and (plant-2, ) have the same probability and rank. We used the same method to evaluate the recall rate in the second experiment, where both unigrams and bigrams were used. The experimental results show a slight improvement over the results obtained using only unigrams.", "cite_spans": [ { "start": 369, "end": 380, "text": "(plant-2, )", "ref_id": null }, { "start": 385, "end": 396, "text": "(plant-2, )", "ref_id": null } ], "ref_spans": [ { "start": 234, "end": 241, "text": "Table 8", "ref_id": null }, { "start": 296, "end": 303, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "The Class Translation", "sec_num": "3." }, { "text": "In these experiments, we estimated the translation probability based on unigrams and bigrams. The evaluation results confirm our observation that we can exploit shared characters in the translations of semantically related senses to obtain relevant translations. We evaluated the experimental results based on whether the Top 1 to Top 5 translations covered all appropriate translations. If we selected the Top 1 translation in the first experiment as the most appropriate translation, there were 344 correct entries, and the recall rate was 68.8%. The Top 2 translations covered 408 correct entries, and the recall rate was 81.6%. Table 9 shows the recall rate with regard to the number of top-ranking translations used for the purpose of evaluation.", "cite_spans": [], "ref_spans": [ { "start": 628, "end": 635, "text": "Table 9", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "The Class Translation", "sec_num": "3." }, { "text": "In this paper, a statistical class-based translation model for the semi-automatic construction of a Chinese WordNet has been proposed. Our approach is based on selecting the appropriate Chinese translation for each word sense in WordNet. We observe that sets of semantically related words tend to share some Chinese characters in their Chinese translations. We propose to rely on the knowledge base of a Class-Based Translation Model derived from a statistical analysis of the relationship between semantic classes in WordNet and translations in the bilingual version of the Longman Dictionary of Contemporary English (LDOCE). We carried out two experiments which show that CBTM is effective in speeding up the construction of a Chinese WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5."
}, { "text": "The first experiment was based on the translation probability of unigrams, and the second was based on both unigrams and bigrams. Experimental results show that the method produces a Chinese WordNet covering 76.43% of the nominal senses in SEMCOR, which implies that a high percentage of the word senses can be effectively handled. Among our 500 testing cases, the recall rate was around 70%, 80% and 90%, respectively, when the Top 1, Top 2, and Top 3 translations were evaluated. The recall rate when using both unigrams and bigrams was slightly higher than that when using only unigrams. Our results can be used to assist the manual editing of word sense translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." }, { "text": "A number of interesting future directions present themselves. First, obviously, there is potential for combining two or more methods to get even better results in connecting WordNet senses with translations. Second, although nouns are most important for information retrieval, other parts of speech are important for other applications. We plan to extend the method to verbs, adjectives and adverbs. Third, the translations in a machine readable dictionary are at times not very well lexicalized. The translations in a bilingual corpus cauld be used to improve the degree of lexicalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." } ], "back_matter": [ { "text": "This study was partially supported by grants from the National Science Council (NSC 90-2411-H-007-033-MC) and the MOE (project EX 91-E-FA06-4-4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mapping Multilingual Hierarchies using Relaxation Labelling", "authors": [ { "first": "J", "middle": [], "last": "Daud\u00e9", "suffix": "" }, { "first": "L", "middle": [], "last": "Padr\u00f3", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 1999, "venue": "Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daud\u00e9, J., L. Padr\u00f3 and G. Rigau, \"Mapping Multilingual Hierarchies using Relaxation Labelling,\" Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Mapping WordNets using Structural Information", "authors": [ { "first": "J", "middle": [], "last": "Daud\u00e9", "suffix": "" }, { "first": "L", "middle": [], "last": "Padr\u00f3", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daud\u00e9, J., L. Padr\u00f3 and G. 
Rigau, \"Mapping WordNets using Structural Information,\" Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, 2000.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Longman Lexicon of Contemporary English", "authors": [ { "first": "T", "middle": [], "last": "Mcarthur", "suffix": "" } ], "year": 1992, "venue": "Longman Group (Far East) Ltd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McArthur, T., \"Longman Lexicon of Contemporary English,\" Longman Group (Far East) Ltd., Hong Kong, 1992.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A method for Word Sense Disambiguation of unrestricted text", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "152--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihalcea, R. and D. Moldovan., \"A method for Word Sense Disambiguation of unrestricted text,\" Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, 1999, pp. 152-158.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Five papers on WordNet", "authors": [ { "first": "G", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, G., \"Five papers on WordNet,\" International Journal of Lexicography, 3(4), 1990.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Informative Role of WordNet in Open-Domain Question Answering", "authors": [ { "first": "M", "middle": [], "last": "Pasca", "suffix": "" }, { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the NAACL 2001 Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations", "volume": "", "issue": "", "pages": "138--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pasca, M. and S. Harabagiu, \"The Informative Role of WordNet in Open-Domain Question Answering,\" in Proceedings of the NAACL 2001 Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations, June 2001, Carnegie Mellon University, Pittsburgh PA, pp. 138-143.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Longman English-Chinese Dictionary of Contemporary English", "authors": [ { "first": "P", "middle": [], "last": "Proctor", "suffix": "" } ], "year": 1988, "venue": "Longman Group (Far East) Ltd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Proctor, P., \"Longman English-Chinese Dictionary of Contemporary English,\" Longman Group (Far East) Ltd., Hong Kong, 1988.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Disambiguating Highly Ambiguous Words", "authors": [ { "first": "G", "middle": [], "last": "Towell", "suffix": "" }, { "first": "E", "middle": [], "last": "Voothees", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "1", "pages": "125--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Towell, G. and E. Voothees, \"Disambiguating Highly Ambiguous Words,\" Computational Linguistics, 24(1) 1998, pp. 
125-146.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Multilingual Design of the EuroWordNet Database", "authors": [ { "first": "P", "middle": [], "last": "Vossen", "suffix": "" }, { "first": "P", "middle": [], "last": "Diez-Orzas", "suffix": "" }, { "first": "W", "middle": [], "last": "Peters", "suffix": "" } ], "year": 1997, "venue": "Processing of the IJCAI-97 workshop Multilingual Ontologies for NLP Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vossen, P., P. Diez-Orzas and W. Peters, \"The Multilingual Design of the EuroWordNet Database,\" Processing of the IJCAI-97 workshop Multilingual Ontologies for NLP Applications, 1997.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A syntax-lexical semantics interface analysis of collocation errors", "authors": [ { "first": "D", "middle": [], "last": "Wible", "suffix": "" }, { "first": "A", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wible, D. and A. Liu, \"A syntax-lexical semantics interface analysis of collocation errors,\" PacSLRF 2001.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "indicate the general concepts of the class.With the unigram translation probability P( u | g), one can apply Equations 5a and 6 to proceed with the Expectation Step and calculate the probability of each translation candidate for a word sense as shown in Example 1: trunk-1 ) = 0.0124/(0.0124+0.00054+0.00283+0.00285) = 0.0290010741, P ( | trunk-1 ) = 0.0124/(0.0124+0.00054+0.00283+0.00285) = 0.1519871106, P ( | trunk-1 ) = 0.0124/(0.0124+0.00054+0.00283+0.00285) = 0.1530612245.Using simple linear interpolation of translation unigrams and bigrams (Equation 5b), the probability of each translation candidate for a word sense can be calculated as shown", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "text": "Words in the same conceptual class that often share common Chinese characters in their translations.", "content": "
Code (set title) | Hyponyms
fish (aquatic vertebrate) | carp
fish (aquatic vertebrate) | catfish
fish (aquatic vertebrate) | eel
complex (building) | factory
complex (building) | cannery
complex (building) | mill
speech (communication) | discussion
", "html": null, "num": null }, "TABREF2": { "type_str": "table", "text": "Words in four classes related to the noun plant.", "content": "
English | WN sense | Class Code | Words in the Class
Plant | 1 | N001004003030 | factory, mill, assembly plant, \u2026
Plant | 2 | N001001005 | flora, plant life, \u2026
Plant | 3 | N001001015008 | thought, idea, \u2026
Plant | 4 | N001001003001001 | producer, supernatural, \u2026
Plant | 4 | N001003001002001 | announcer, conceiver, \u2026
For instance, plant-1 belongs to the class g represented by the WordNet synset (structure, construction):
", "html": null, "num": null }, "TABREF3": { "type_str": "table", "text": "", "content": "
T Table | SC Table | CT Table
English Word, Chinese Word | English Word, WN Sense, POS, Class Code | Class, Translation Character, Prob.
plant | plant, 1, n, N001004003030 | N001004003030, 0.0178
plant | plant, 2, n, N001001005 | N001004003030, 0.0174
plant | plant, 3, n, N001001015008 | N001004003030, 0.0088
plant | plant, 4, n, N001001003001001 | N001004003030, 0.0073
plant | plant, 4, n, N001003001002001 |
plant | | N001001005, 0.0161
| | N001001005, 0.0161
Translation Table | Semantic Class Table | Class Translation Table
BST Table | CWN Table
English Word, Sense No., POS, Chinese Word, Prob. | English Word, Sense No., POS, Chinese Word
plant, 1, n, 0.0124 | plant, 1, n
plant, 1, n, 0.0028 | plant, 2, n
plant, 1, n, 0.0023 |
plant, 1, n, 0.0014 |
plant, 1, n, 0.0013 |
plant, 1, n |
", "html": null, "num": null }, "TABREF4": { "type_str": "table", "text": "Words and their translations in the semantic class N001004001018013014", "content": "
English E | WN sense k | G(E_k) | Chinese Translation
Beanstalk | 1 | N001004001018013014 |
Bole | 2 | N001004001018013014 |
Branch | 2 | N001004001018013014 |
Branch | 2 | N001004001018013014 |
Branch | 2 | N001004001018013014 |
Brier | 2 | N001004001018013014 |
Bulb | 1 | N001004001018013014 |
Bulb | 1 | N001004001018013014 |
Cane | 2 | N001004001018013014 |
Cutting | 2 | N001004001018013014 |
Cutting | 2 | N001004001018013014 |
Stick | 2 | N001004001018013014 |
Stick | 2 | N001004001018013014 |
Stem | 2 | N001004001018013014 |
Stem | 2 | N001004001018013014 |
", "html": null, "num": null }, "TABREF5": { "type_str": "table", "text": "Probabilities of each unigram for the semantic class containing trunk-1, etc.", "content": "
Unigram (u) | Semantic Class Code (g) | P( u | g )
", "html": null, "num": null }, "TABREF6": { "type_str": "table", "text": "Probabilities of each bigram for the semantic class containing trunk-1, etc.", "content": "
Bigram (b) | Semantic Class Code (g) | P( b | g )
| N001004001018013014 | 0.0287
| N001004001018013014 | 0.0269
| N001004001018013014 | 0.0145
| N001004001018013014 | 0.0144
| N001004001018013014 | 0.0134
\u2026 | \u2026 | \u2026
", "html": null, "num": null }, "TABREF7": { "type_str": "table", "text": "The results and appropriate translations for each sense of the English word.", "content": "
English | WN sense | Chinese Translation | Appropriate Chinese Translation
Plant | 1 | |
Plant | 2 | |
Plant | 3 | |
Plant | 4 | |
Spur | 1 | |
Spur | 2 | |
Spur | 4 | |
Spur | 5 | |
Bank | 1 | |
Bank | 2 | |
Bank | 3 | |
Scale | 1 | |
Scale | 2 | |
Scale | 3 | |
Scale | 5 | |
Scale | 6 | |
", "html": null, "num": null }, "TABREF8": { "type_str": "table", "text": "The degree of ambiguity and number of words in the test data with different degree of ambiguity.", "content": "
Degree of ambiguity (# of senses in WordNet) | # of word types in the test data | Examples
1 | 75 | aptitude, controversy, regret
2 | 77 | camera, fluid, saloon
3 | 70 | drain, manner, triviality
4 | 51 | confusion, fountain, lesson
5 | 35 | isolation, pressure, spur
6 | 25 | blood, creation, seat
7 | 28 | column, growth, mind
8 | 9 | contact, hall, program
9 | 7 | body, company, track
10 | 8 | bank, change, front
>10 | 25 | control, corner, draft
", "html": null, "num": null }, "TABREF9": { "type_str": "table", "text": "The recall rate in the first experiment", "content": "
English | Semantic class | WN sense | Chinese Translation | Probability | Rank
Plant | N001004003030 (structure) | 1 | | 0.012372 | 1
Plant | N001004003030 (structure) | 1 | | 0.002823 | 2
Plant | N001004003030 (structure) | 1 | | 0.002270 | 3
Plant | N001004003030 (structure) | 1 | | 0.001375 | 4
Plant | N001004003030 (structure) | 1 | | 0.001278 | 5
Plant | N001004003030 (structure) | 1 | | 0.000130 | 6
Plant | N001001005 (flora) | 2 | | 0.016084 | 1
Plant | N001001005 (flora) | 2 | | 0.002623 | 2
Plant | N001001005 (flora) | 2 | | 0.000874 | 3
Plant | N001001005 (flora) | 2 | | 0.000525 | 4
Plant | N001001005 (flora) | 2 | | 0.000525 | 4
Plant | N001001005 (flora) | 2 | | 0.000360 | 5
Number of top-ranking translations | Correct entries (total = 500) | Recall rate (unigram) | Recall rate (unigram+bigram)
Top 1 | 344 | 68.8% | 70.2%
Top 2 | 408 | 81.6% | 83.2%
Top 3 | 441 | 88.2% | 89.0%
Top 4 | 449 | 89.8% | 91.4%
Top 5 | 462 | 92.4% | 93.2%
", "html": null, "num": null } } } }