{ "paper_id": "O06-4003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:07:44.844020Z" }, "title": "Sense Extraction and Disambiguation for Chinese Words from Bilingual Terminology Bank", "authors": [ { "first": "Ming-Hong", "middle": [], "last": "Bai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": {} }, "email": "mhbai@sinica.edu.tw" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": {} }, "email": "kchen@iis.sinica.edu.tw" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "jschang@cs.nthu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Using lexical semantic knowledge to solve natural language processing problems has been getting popular in recent years. Because semantic processing relies heavily on lexical semantic knowledge, the construction of lexical semantic databases has become urgent. WordNet is the most famous English semantic knowledge database at present; many researches of word sense disambiguation adopt it as a standard. Because of the success of WordNet, there is a trend to construct WordNet in different languages. In this paper, we propose a methodology for constructing Chinese WordNet by extracting information from a bilingual terminology bank. We developed an algorithm of word-to-word alignment to extract the English-Chinese translation-equivalent word pairs first. Then, the algorithm disambiguates word senses and maps Chinese word senses to WordNet synsets to achieve the goal. In the word-to-word alignment experiment, this alignment algorithm achieves the f-score of 98.4%. In the word sense disambiguation experiment, the extracted senses cover 36.89% of WordNet synsets and the accuracy of the three proposed disambiguation rules achieve the accuracies of 80%, 83% and 87%, respectively.", "pdf_parse": { "paper_id": "O06-4003", "_pdf_hash": "", "abstract": [ { "text": "Using lexical semantic knowledge to solve natural language processing problems has been getting popular in recent years. Because semantic processing relies heavily on lexical semantic knowledge, the construction of lexical semantic databases has become urgent. WordNet is the most famous English semantic knowledge database at present; many researches of word sense disambiguation adopt it as a standard. Because of the success of WordNet, there is a trend to construct WordNet in different languages. In this paper, we propose a methodology for constructing Chinese WordNet by extracting information from a bilingual terminology bank. We developed an algorithm of word-to-word alignment to extract the English-Chinese translation-equivalent word pairs first. Then, the algorithm disambiguates word senses and maps Chinese word senses to WordNet synsets to achieve the goal. In the word-to-word alignment experiment, this alignment algorithm achieves the f-score of 98.4%. 
In the word sense disambiguation experiment, the extracted senses cover 36.89% of WordNet synsets, and the three proposed disambiguation rules achieve accuracies of 80%, 83%, and 87%, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Using lexical semantic knowledge to solve natural language processing problems has become increasingly popular in recent years. The semantic lexicon plays a particularly important role in word sense disambiguation. However, all semantic approaches depend on well-established semantic lexical databases that provide semantic information about words, such as the different senses of a word and the synonymy or hypernymy relations between words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "WordNet is a famous semantic lexical database with rich lexical information [Miller 1990]. It not only covers a large vocabulary but also establishes a complete taxonomic structure for word senses. Synonymous word senses are grouped into synsets, and these synsets are further associated by semantic relations, including hypernymy, hyponymy, holonymy, meronymy, etc. WordNet has been applied to a wide range of applications, such as word sense disambiguation, information retrieval, and computer-assisted language learning. It has become the de facto standard for English word senses.", "cite_spans": [ { "start": 76, "end": 89, "text": "[Miller 1990]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Because of the success of WordNet, there is a widely shared interest in the construction of WordNet-like and WordNet-embedded lexical databases for different languages. One of the most famous projects is EuroWordNet (EWN), whose goal is to construct a WordNet-like system covering several European languages. Since constructing a WordNet for a new language is a difficult and labor-intensive task, using the resources of WordNet to speed up the construction has become a new trend. Many researchers, such as [Atserias et al. 1997], [Daude et al. 1999] and [Chang et al. 2003], have tried to automatically associate WordNet synsets with other languages via appropriate translations from bilingual dictionaries. The limitation of using bilingual dictionaries as mapping tables for translation equivalences between two languages is their narrow scope, since dictionaries usually contain only prototypical translations. For example, the first sense of the word \"plant\" in WordNet is \"plant, works, industrial plant\"; it is translated as \"GongChang\" (\u5de5\u5ee0) in a Chinese-English bilingual dictionary. However, in actual text, it may also be translated as \"Chang\" (\u5ee0), \"GongChang\" (\u5de5\u5834), \"ChangFang\" (\u5ee0\u623f), \"Suo\" (\u6240, as in 'power plant'/\u767c\u96fb\u6240), etc. Such varied translations obviously add complexity and difficulty to mapping word senses onto WordNet synsets.", "cite_spans": [ { "start": 507, "end": 529, "text": "[Atserias et al. 1997]", "ref_id": "BIBREF0" }, { "start": 532, "end": 551, "text": "[Daude et al. 1999]", "ref_id": "BIBREF6" }, { "start": 556, "end": 575, "text": "[Chang et al. 2003]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
}, { "text": "Instead of using bilingual dictionaries, we adopt a bilingual terminology bank as the semantic lexical database. The latter includes various compound words, in which a word in a different compounding structure may have different translations, thus there are more translation candidates which can be chosen. A bilingual terminology bank has not only helped to avoid the problem of the limited scope of prototypical translations made by common bilingual dictionaries, but has also helped to disambiguate word senses by various translations and collocations [Diab et al. 2002] , [Bhattacharya 2004 ]. Nevertheless, using bilingual terminology banks has to face two main challenges: Firstly, we have to deal with the problem of word-to-word alignment for multi-words terms. Secondly, we have to solve the problem of sense ambiguity of the English translation. The approaches for solving these two problems are the major focuses of the paper.", "cite_spans": [ { "start": 555, "end": 573, "text": "[Diab et al. 2002]", "ref_id": "BIBREF8" }, { "start": 576, "end": 594, "text": "[Bhattacharya 2004", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of paper is divided into four sections. Section 2 introduces the resources of this Chinese Words from Bilingual Terminology Bank paper. Section 3 describes the methodology. Experimental setup and results will be addressed in Section 4. A conclusion is provided in Section 5 along with directions for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this study, we use two dictionaries as the resources to extract semantic information:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources", "sec_num": "2." }, { "text": "a) The Bilingual Terminology Bank from NICT [NICT 2004] b) A English-Chinese dictionary [Proctor 1988] The Bilingual Terminology Bank from NICT contains 63 classes of terminologies, with a total of 1,046,058 Chinese terms with their English translations. Among them, 629,352 terms are compounds, which is about 60 percent of the total. The English-Chinese dictionary contains 208,163 words which are used as a supplement. We also adopt WordNet 2.0 as the medium for sense linking. Figure 1 shows some sample entries of the Bilingual Terminology Bank from NICT. In English, a compound is usually composed of words and blanks; the latter being a natural boundary to separate words. On the contrary, in Chinese there are no blanks in compound words, so we need to segment words before applying word alignment algorithms. In this paper, we adopt the CKIP Chinese Word Segmentation System, which was developed by the CKIP group of Academia Sinica [CKIP 2006] .", "cite_spans": [ { "start": 44, "end": 55, "text": "[NICT 2004]", "ref_id": null }, { "start": 88, "end": 102, "text": "[Proctor 1988]", "ref_id": "BIBREF13" }, { "start": 942, "end": 953, "text": "[CKIP 2006]", "ref_id": null } ], "ref_spans": [ { "start": 481, "end": 489, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Resources", "sec_num": "2." }, { "text": "The algorithm can be divided into the following two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "1. Find the word to word alignment for each entry in the terminology bank, 2. 
Assign a synset to each Chinese word sense by resolving the sense ambiguities of its aligned English word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "The first step finds all possible English translations for each Chinese word, which makes it possible to link Chinese words to WordNet synsets. Since an English translation may be ambiguous, the purpose of the second step is to employ a word sense disambiguation algorithm to select the appropriate synset for the Chinese word. For example, the term pair (water tank, \u6c34 \u69fd) will be aligned as (water/\u6c34 tank/\u69fd) in the first step, so the Chinese word \u69fd can be linked to WordNet synsets by its translation tank. But tank has five senses in WordNet, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "tank_n_1: an enclosed armored military vehicle, tank_n_2: a large vessel for holding gases or liquids, tank_n_3: as much as a tank will hold, tank_n_4: a freight car that transports liquids or gases in bulk, tank_n_5: a cell for violent prisoners.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "The second step is applied to select the best sense. In the following subsections, we describe the detailed word alignment algorithm in Section 3.1 and the word sense disambiguation algorithm in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "For a Chinese term and its English translation, it is natural to assume that the Chinese term is translated from the English term word for word. The purpose of word alignment is thus to connect the words of the Chinese term and the words of its English counterpart that stand in a translation relationship. In past years, several statistics-based word alignment methods have been proposed. [Brown et al. 1993] proposed a word alignment method consisting of five translation models, also known as the IBM translation models; each model focuses on certain features of a sentence pair to estimate the translation probability. [Vogel et al. 1996] proposed the Hidden-Markov alignment model, which makes the alignment probabilities dependent on the alignment position of the previous word rather than on absolute positions. [Och and Ney 2000] proposed methods to adjust the IBM models to improve alignment performance.", "cite_spans": [ { "start": 374, "end": 393, "text": "[Brown et al. 1993]", "ref_id": "BIBREF3" }, { "start": 611, "end": 630, "text": "[Vogel et al. 1996]", "ref_id": "BIBREF14" }, { "start": 810, "end": 828, "text": "[Och and Ney 2000]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "3.1" }, { "text": "The word alignment task in this paper focuses only on the term pairs of a bilingual terminology bank. Since a term is usually far shorter than a sentence, some features, such as word position, are no longer important for this task. We therefore employ the IBM-1 model, which relies only on lexical generation probabilities, to align the words of the bilingual terminology bank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "3.1" }, { "text": "For convenience, we follow the notation of [Brown et al. 1993], which defines word alignment as follows:", "cite_spans": [ { "start": 41, "end": 60, "text": "[Brown et al. 
1993]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "Suppose we have a English term e = e 1 ,e 2 ,\u2026,e n where e i is an English word, and its corresponding Chinese term c = c 1 ,c 2 ,\u2026,c m where c j is a Chinese word. An alignment from e to c can be represented by a series a=a 1 ,a 2 ,\u2026,a m where each a j is an integer between 0 and n, such that if c j is partial (or total) translation of e i , then a j = i and if it is not translation of any English word, then a j =0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "For example, the alignments shown in Figure 2 are two possible alignments from English to Chinese for the term pair (practice teaching, \u6559\u5b78 \u5be6\u7fd2), (a) can be represented by a=1,2 while (b) can be represented by a=2,1. In the word alignment stage, given a pair of terms c and e, we want to find the most likely alignment a=a 1 ,a 2 ,\u2026,a m , to maximize the alignment probability P(a|c,e) for the pair. The formula can be represented as follows:", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 45, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u02c6arg max ( | , ) P = a a ace,", "eq_num": "(1)" } ], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "where \u00e2 is the best alignment of the possible alignments. Suppose we already have lexical translation probabilities for each of the lexical pairs, then, the alignment probability P(a|c,e) can be estimated by means of the lexical translation probabilities as follows: The probability of c given e, P(c|e), is a constant for a given term pair (c,e), so formula 1 can be estimated as follows: . 2For example, the probability of the alignment shown in Figure 2 (a) can be estimated by:", "cite_spans": [], "ref_spans": [ { "start": 448, "end": 456, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "1 ( , | ) ( | , ) ( | )/ ( | ) ( | )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "P(c 1 |e 1 )P(c 2 |e 2 ) = P( \u6559\u5b78 | practice) P( \u5be6\u7fd2 | teaching) = 0.000480 x 1.14x10 -13 =5.48x10 -17 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "While (b) can be estimated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "P(c 1 |e 2 )p(c 2 |e 1 ) = P( \u6559\u5b78 | teaching)P( \u5be6\u7fd2 | practice ) = 0.6953 x 0.0940 = 0.0654.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "In this example, the probability of alignment (b) is larger than (a) in Figure 2 . So the alignment (b), (\u6559\u5b78/teaching \u5be6\u7fd2/practice), is a better choice than (a), (\u6559\u5b78/practice \u5be6\u7fd2 /teaching), for the term pair (practice teaching, \u6559\u5b78 \u5be6\u7fd2). 
, { "text": "The remaining problem of this stage is how to estimate the translation probability P(c|e) for all possible English-Chinese lexical pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Word Alignment", "sec_num": "3.1.1" }, { "text": "Our translation probabilities are estimated with IBM model 1 [Brown et al. 1993], which uses the EM algorithm [Dempster et al. 1977] to maximize the likelihood of generating the Chinese terms (the target language) given their English counterparts (the source language). Suppose we have an English term e and its Chinese translation c in the terminology bank T; e is a word in e, and c is a word in c. The probability of word c given word e, P(c|e), can be estimated by iteratively re-estimating the following EM formulae:", "cite_spans": [ { "start": 74, "end": 93, "text": "[Brown et al. 1993]", "ref_id": "BIBREF3" }, { "start": 131, "end": 153, "text": "[Dempster et al. 1977]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "Initialization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(c|e) = \\frac{1}{|C|};", "eq_num": "(3)" } ], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "E-step:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Z(c,e;\\mathbf{c},\\mathbf{e}) = \\sum_{\\mathbf{a}} P(\\mathbf{a}|\\mathbf{c},\\mathbf{e}) \\sum_{j=1}^{m} \\delta(c,c_j)\\,\\delta(e,e_{a_j}),", "eq_num": "(4)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(\\mathbf{a}|\\mathbf{c},\\mathbf{e}) = \\frac{\\prod_{j=1}^{m} P(c_j|e_{a_j})}{\\sum_{\\mathbf{a}'} \\prod_{j=1}^{m} P(c_j|e_{a'_j})};", "eq_num": "(5)" } ], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "M-step:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(c|e) = \\frac{\\sum_{t=1}^{|T|} Z(c,e;\\mathbf{c}^{(t)},\\mathbf{e}^{(t)})}{\\sum_{v \\in C} \\sum_{t=1}^{|T|} Z(v,e;\\mathbf{c}^{(t)},\\mathbf{e}^{(t)})}.", "eq_num": "(6)" } ], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "In the EM training process, we initially assume that the translation probability of any Chinese word c given English word e, P(c|e), is uniformly distributed, as in formula 3, where C denotes the set of all Chinese words in the terminology bank. In the E-step, we estimate the expected number of times that e connects to c in the term pair (c,e): as in formula 4, we sum the expected counts of the connection from e to c over all possible alignments containing the connection, and Z(c,e;c,e) denotes this expected count. Formula 5 is the detailed definition of the probability of an alignment a given (c,e). Usually, it is hard to evaluate the E-step formulae directly. Fortunately, it has been proven [Brown et al. 1993] that the expectation formulae 4 and 5 can be merged and simplified as follows:", "cite_spans": [ { "start": 712, "end": 730, "text": "[Brown et al. 1993", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Z(c,e;\\mathbf{c},\\mathbf{e}) = \\frac{P(c|e)}{\\sum_{i=1}^{n} P(c|e_i)} \\sum_{j=1}^{m} \\delta(c,c_j) \\sum_{i=1}^{n} \\delta(e,e_i).", "eq_num": "(7)" } ], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "After this merging and simplification, as in formula 7, the E-step becomes very simple and efficient to compute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "In the M-step, we re-estimate the translation probability, P(c|e). 
As shown in formula 6, we sum the expected number of connections from e to c over the whole bank and divide by the total expected number of connections from e, summed over all Chinese words v \u2208 C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "The training process counts the expected numbers (E-step) and re-estimates the translation probabilities (M-step) iteratively until convergence. Some translation probabilities estimated in this experiment are shown in Figures 3-6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "For instance, take the example shown in Figure 2: the English term e = practice teaching and the Chinese term c = \u6559\u5b78 \u5be6\u7fd2 are given. Assume the total number of Chinese words in the terminology bank is 100,000. Initially, the probability of each translation is: P(\u6559\u5b78 | practice) = 1/|C| = 0.00001, P(\u6559\u5b78 | teaching) = 1/|C| = 0.00001, P(\u5be6\u7fd2 | practice) = 1/|C| = 0.00001, P(\u5be6\u7fd2 | teaching) = 1/|C| = 0.00001.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 48, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "In the E-step, we count the expected number for every possible connection in the term pair. Since the initial probabilities are uniform, each of the four connections in this pair receives an expected count of 0.00001 / (0.00001 + 0.00001) = 0.5 by formula 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "In the M-step, we first compute the global expected number of each translation by summing the expected counts over all entries of the term bank, e.g.: Z(\u6559\u5b78, practice) = \u2211_{t=1}^{|T|} Z(\u6559\u5b78, practice; c^(t), e^(t)) = 0.7, Z(\u6559\u5b78, teaching) = \u2211_{t=1}^{|T|} Z(\u6559\u5b78, teaching; c^(t), e^(t)) = 0.95. After the global expected number of each translation has been computed, we re-estimate the translation probabilities from the expected numbers: P(\u6559\u5b78 | practice) = Z(\u6559\u5b78, practice) / \u2211_v Z(v, practice) = 0.00632, P(\u6559\u5b78 | teaching) = Z(\u6559\u5b78, teaching) / \u2211_v Z(v, teaching) = 0.95 / 121.88 = 0.00779.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }
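, { "text": "The whole estimation procedure of formulas 3-7 fits in a short program. The sketch below is our illustrative reading of the training loop (variable names are ours; the NULL word and an explicit convergence test are omitted for brevity), not the authors' implementation:

```python
from collections import defaultdict

# IBM model 1 EM training over (chinese_words, english_words) term
# pairs: uniform initialization (formula 3), simplified expected
# counts (formula 7), and re-estimation (formula 6).
def train_ibm1(pairs, iterations=10):
    c_vocab = {c for c_words, _ in pairs for c in c_words}
    p = defaultdict(lambda: 1.0 / len(c_vocab))  # formula 3: 1/|C|
    for _ in range(iterations):
        z = defaultdict(float)      # global expected counts Z(c, e)
        total = defaultdict(float)  # sum of Z(v, e) over all v
        for c_words, e_words in pairs:  # E-step (formula 7)
            for c in c_words:
                denom = sum(p[(c, e)] for e in e_words)
                for e in e_words:
                    count = p[(c, e)] / denom
                    z[(c, e)] += count
                    total[e] += count
        for (c, e), count in z.items():  # M-step (formula 6)
            p[(c, e)] = count / total[e]
    return p

pairs = [([\"\u6559\u5b78\", \"\u5be6\u7fd2\"], [\"practice\", \"teaching\"]),
         ([\"\u6559\u5b78\", \"\u65b9\u6cd5\"], [\"teaching\", \"method\"])]
p = train_ibm1(pairs)
print(round(p[(\"\u6559\u5b78\", \"teaching\")], 3))
# rises above uniform, since \u6559\u5b78 co-occurs with teaching in both pairs
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }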
"eq_num": "; , ) ( , ; , ) T t" } ], "section": "Translation Probability Estimation", "sec_num": "3.1.2" }, { "text": "As was mentioned in Section 3.1.1, the goal of word alignment is to find the best alignment candidate to maximize the translation probability of a term pair. However, in real situations there are some problems that have to be solved:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imposing Alignment Constraints", "sec_num": "3.1.3" }, { "text": "1. Cross connections: assume there is a series of words, c j ,c j+1 ,c j+2 in a Chinese term, if c j and c j+2 connect to the same English word while c j+1 connects to any other word, we call this Chinese Words from Bilingual Terminology Bank alignment contains a cross connection. There is an example of cross connection shown in Figure 7 . The Chinese word \u6821 is more likely to connect to examination shown in Figure 8 . 2. Function words: in word alignment stage, function words are usually ignored except when they are part of compound words. For example, Figure 9 , of is a part of a compound which can not be skipped, while in Figure 10 , of can be skipped. In order to solve this problem, two constraints are imposed on the alignment algorithm. Formula 1 is altered by using a cost function instead of probability, defined as follows:", "cite_spans": [], "ref_spans": [ { "start": 331, "end": 339, "text": "Figure 7", "ref_id": null }, { "start": 411, "end": 419, "text": "Figure 8", "ref_id": null }, { "start": 559, "end": 567, "text": "Figure 9", "ref_id": "FIGREF13" }, { "start": 632, "end": 641, "text": "Figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Imposing Alignment Constraints", "sec_num": "3.1.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "arg min ( ) cost = a a a,", "eq_num": "(8)" } ], "section": "Imposing Alignment Constraints", "sec_num": "3.1.3" }, { "text": "where cost function is given by: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imposing Alignment Constraints", "sec_num": "3.1.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u221e = \u23aa \u23aa \u23aa \u23aa = \u221e \u23a8 \u23aa \u23aa \u23aa \u2212 \u23aa \u23aa \u23a9 \u2211 a a .", "eq_num": "(9)" } ], "section": "Imposing Alignment Constraints", "sec_num": "3.1.3" }, { "text": "The cross connection function is used to detect the cross connection in an alignment candidate. If a cross connection is found, the alignment candidate will be assigned a large cost value. The function was given by: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Imposing Alignment Constraints", "sec_num": "3.1.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2260 = \u23a7 = \u23a8 \u23a9 a .", "eq_num": "(10)" } ], "section": "Imposing Alignment Constraints", "sec_num": "3.1.3" }, { "text": "There are two connection directions in word alignment: from Chinese to English, (where Chinese is the source language while English is the target language), and from English to Chinese. The alignment method of the IBM models has a restriction; a word of target language can only be connected to exactly one word of the source language. 
, { "text": "There are two connection directions in word alignment: from Chinese to English (where Chinese is the source language and English is the target language) and from English to Chinese. The alignment method of the IBM models has a restriction: a word of the target language can be connected to exactly one word of the source language, so a target-language word can never be connected to two source-language words at once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Connection Directions", "sec_num": "3.1.4" }, { "text": "For example, in Figure 11, for the alignment from Chinese to English, cedar should be connected to both \u96ea and \u677e, but the model does not allow this connection in that direction. Figure 12 shows another example of the same problem, from English to Chinese: \u842c\u6709\u5f15\uf98a cannot be connected by both universal and gravitation in that direction.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 25, "text": "Figure 11", "ref_id": "FIGREF17" } ], "eq_spans": [], "section": "Connection Directions", "sec_num": "3.1.4" }, { "text": "In order to solve this problem, the alignments of the two directions are merged using the following steps (a code sketch follows the example below): 1. Align from Chinese to English. The words of an English compound are all connected to the same Chinese word in this step, and such a group is treated as an alignment unit in the next step. 2. Align from English to Chinese. The words of a Chinese compound are all connected to the same English unit, a word or a merged compound, in this step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Connection Directions", "sec_num": "3.1.4" }, { "text": "For example, universal gravitation was merged in step 1, while \u96ea and \u677e were not merged in that step, as shown in Figure 13. In step 2, \u96ea and \u677e were merged, and universal gravitation was treated as a single unit, as shown in Figure 14. After these two steps, all of the compounds in each language have been merged. Figure 15 shows some examples of word alignment from these experiments.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 122, "text": "Figure 13", "ref_id": "FIGREF18" }, { "start": 224, "end": 233, "text": "Figure 14", "ref_id": "FIGREF19" }, { "start": 313, "end": 322, "text": "Figure 15", "ref_id": null } ], "eq_spans": [], "section": "Connection Directions", "sec_num": "3.1.4" }
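, { "text": "The merging of one-to-many connections into units can be sketched as follows (our simplification: we only group adjacent target positions that share a source word, which suffices for the short terms considered here):

```python
# Group adjacent target positions connected to the same source word
# (alignment holds one source index per target word). Pass 1 (Chinese
# as source) merges English compounds such as universal gravitation;
# pass 2 (English as source) then merges Chinese compounds, treating
# the units from pass 1 as single words.
def merge_units(alignment):
    if not alignment:
        return []
    units = [[0]]
    for t in range(1, len(alignment)):
        if alignment[t] == alignment[t - 1]:
            units[-1].append(t)  # same source word: extend the unit
        else:
            units.append([t])
    return units

# universal gravitation aligned to the single Chinese word \u842c\u6709\u5f15\uf98a:
# both English positions point at source position 1, so they merge.
print(merge_units([1, 1]))  # [[0, 1]] -> one English unit
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Connection Directions", "sec_num": "3.1.4" }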
1997]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "Heuristic 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "If e i is a morpheme of e then pick the sense of e i , say s j , which contains hyponym e. This heuristic rule works for head morphemes of compounds. For example, as shown in figure 16 , the term pair (water tank, \u6c34 \u69fd ) is aligned as (water/\u6c34 tank/\u69fd ). There are five senses for tank. The above heuristic rule will select tank-2 as the sense of tank/\u69fd because there is only one sense of water tank and the sense is a hyponym of tank-2. In this case, the sense of water tank can be tagged as water tank-1 and tank can be tagged as tank-2. Figure 16. water tank-1 is a hyponym of tank-2 .", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 184, "text": "figure 16", "ref_id": "FIGREF0" }, { "start": 538, "end": 584, "text": "Figure 16. water tank-1 is a hyponym of tank-2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "Heuristic 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "Suppose the set {e 1 ,e 2 ,\u2026,e k } contains all possible translations of Chinese word c,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "Case 1: If {e 1 ,e 2 ,\u2026,e k } share a common sense s t , then pick s t as their sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "Case 2: If one element of the set {e 1 ,e 2 ,\u2026,e k }, say e i , has a sense s t which is the hypernym of synsets corresponding to the rest of the words. We say that they nearly share the same sense and pick s t as the sense e i , pick the corresponding hyponyms as the sense of the rest of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "An example of case 1 is the translations of \u8173\u8e0f\uf902, {bicycle, bike, wheel}, which are a subset of a synset. This means that the synset is the common sense of these words and we can pick it as the words' sense. An example of case 2, as shown in figure 17, is the translations of \u4fe1\u865f\u65d7, {signal, signal flag, code flag}, although these words do not exactly share the same sense, one sense of signal is the hypernym of signal flag and code flag. This means that they nearly share the same sense; we pick the hypernym, signal-1, as the sense of signal and the corresponding hyponyms as the sense of signal flag and code flag. If some of the translations of c are tagged in the previous steps and the results show that the translations of c is always tagged with the same sense, we think c to have mono sense, so pick that sense as the sense of untagged translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "1 signal-1 code_flag-1 water tank-1 \u2026 tank-2 \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "signal_flag-", "sec_num": null }, { "text": "In the previous steps, many Chinese-English pairs have been tagged with WordNet senses. 
, { "text": "In the previous steps, many Chinese-English pairs have been tagged with WordNet senses. Among these tagged instances, we found that some Chinese words are always tagged with the same synset, although they may have many different English translations, and these English words may themselves be ambiguous. The untagged translations of such a Chinese word can be tagged with the same synset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "For example, as shown in Figure 18, \u9632\u6ce2\u5824 has many different translations, and some of them are ambiguous in WordNet (groin has 3 senses in WordNet). In fact, the seemingly different senses tagged by the previous steps are all indexed by the same synset in WordNet, so we infer that \u9632\u6ce2\u5824 is monosemous and tag all of its instances with that synset.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 34, "text": "Figure 18", "ref_id": "FIGREF22" } ], "eq_spans": [], "section": "Sense Tagging", "sec_num": "3.2" }, { "text": "In the word alignment experiment, we extracted 840,187 English-Chinese translation pairs, which contain 445,830 Chinese word types and 318,048 English word types. On average, each Chinese word has 1.88 English translations, while each English word has 2.64 Chinese translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4." }, { "text": "In word sense disambiguation, 124,752 Chinese words were linked to 42,589 WordNet synsets, yielding 165,775 (Chinese word, synset) translation pairs. On average, each Chinese word was found to have 1.33 senses in terms of WordNet synsets. In the following subsections, we evaluate the performance of the word alignment and of the WSD results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4." }, { "text": "In order to evaluate the performance of word alignment, we randomly selected 500 term pairs from the terminology bank and aligned them manually as the gold standard. As single-morpheme terms do not need to be aligned, only compound words were considered. We follow the evaluation method defined by [Och and Ney 2000], which defines precision, recall, and alignment error rate (AER) as follows:", "cite_spans": [ { "start": 292, "end": 310, "text": "[Och and Ney 2000]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results of Word Alignment", "sec_num": "4.1" }, { "text": "recall = |A \u2229 S| / |S|, precision = |A \u2229 P| / |A|, AER = 1 \u2212 (|A \u2229 S| + |A \u2229 P|) / (|A| + |S|),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of Word Alignment", "sec_num": "4.1" }, { "text": "where S denotes the annotated set of sure alignments, P denotes the annotated set of possible alignments (S \u2286 P), and A denotes the set of alignments produced by the alignment method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of Word Alignment", "sec_num": "4.1" }, { "text": "The results are shown in Table 1. The recall and precision figures show that the word alignment results are quite accurate. As we expected, word alignment within phrases is much easier and more accurate than within complete sentences. Note that the f-scores of word alignment on complete sentences, even for the current state-of-the-art aligners on closely related languages such as English and French, are still below 95% [Blunsom et al. 2006].", "cite_spans": [ { "start": 425, "end": 446, "text": "[Blunsom et al. 2006]", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 1", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Results of Word Alignment", "sec_num": "4.1" }
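, { "text": "The three evaluation measures can be computed directly from the link sets. A minimal sketch (our code; the representation of links as hypothetical (english_position, chinese_position) pairs is an assumption):

```python
# Word-alignment metrics of [Och and Ney 2000]: A is the set of
# predicted links, S the sure and P the possible gold links
# (S is a subset of P).
def alignment_metrics(A, S, P):
    recall = len(A & S) / len(S)
    precision = len(A & P) / len(A)
    aer = 1 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return recall, precision, aer

S = {(1, 1), (2, 2)}
P = S | {(2, 3)}
A = {(1, 1), (2, 2), (2, 3)}
print(alignment_metrics(A, S, P))  # (1.0, 1.0, 0.0)
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of Word Alignment", "sec_num": "4.1" }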
, { "text": "The main alignment errors were caused by the reasons shown in Table 2. The first error type was caused by word segmentation errors. For example, \u897f\u6d0b\uf96b should be segmented as \u897f\u6d0b \uf96b instead of \u897f \u6d0b\uf96b, and \u518d\u751f\u6c23 should be segmented as \u518d\u751f \u6c23 instead of \u518d \u751f\u6c23. The second error type is the mapping of transliterations, which is a different kind of word alignment problem. The third type was caused by asymmetric translations in the data. For example, in the term pair (navigation star, \u822a\ufa08 \uf96b\u8003 \u661f), the Chinese word \uf96b\u8003 has no appropriate mapping in the English portion. The fourth type was caused by abbreviations, which are also a difficult problem for word alignment.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results of Word Alignment", "sec_num": "4.1" }, { "text": "Since the goal of these experiments is to build a Chinese WordNet automatically, we are more concerned with the quality of WSD than with its quantity. To evaluate the accuracy of the heuristic rules, we randomly selected 200 sense-tagged words for each heuristic rule and checked the sense of each word manually. The accuracy rate of the WSD results is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result of Word Sense Disambiguation", "sec_num": "4.2" }, { "text": "accuracy rate = # of selected words with correct sense / # of selected words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result of Word Sense Disambiguation", "sec_num": "4.2" }, { "text": "The accuracy of each heuristic rule is shown in Table 3; the accuracies of all the heuristic rules are above 80%. Note that, in the lexical sample task of Senseval-3 [Mihalcea et al. 2004], the precision of the best supervised WSD methods is below 73%, and the unsupervised methods perform even worse. Furthermore, those methods depend heavily on the contexts of the target words, which are not available in our setting. These are the reasons why we use heuristic rules instead of conventional WSD methods. We are also concerned with how many WordNet senses can be linked with Chinese words. Two coverage rates are defined as follows: coverage rate of word-sense pairs = # of word-sense pairs linked / # of word-sense pairs in WordNet; coverage rate of synsets = # of synsets linked / # of synsets in WordNet.", "cite_spans": [ { "start": 166, "end": 188, "text": "[Mihalcea et al. 2004]", "ref_id": null } ], "ref_spans": [ { "start": 48, "end": 55, "text": "Table 3", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Result of Word Sense Disambiguation", "sec_num": "4.2" }, { "text": "In the WSD steps, 484,771 tokens were tagged with WordNet synsets, covering 54,654 distinct word-sense pairs. In other words, there are 54,654 distinct word-sense pairs that are linked with some Chinese word. The coverage of word-sense pairs and synsets is shown in Table 4. The synset coverage of heuristic rule 3 is not listed in the table, because rule 3 only tags Chinese words that were disambiguated in the previous steps and does not link any Chinese word with a new synset. 
The table shows that the coverage of word-sense pairs in WordNet 2.0 is 26.9% and the coverage of synsets is 36.89%. This coverage may seem too low. One possible reason is that most of the synsets in WordNet are infrequent. To examine this phenomenon, we use the sense frequencies provided with WordNet, which are the occurrence frequencies of each synset in the SemCor corpus. By this analysis, there are 115,423 synsets in WordNet 2.0, but only 28,688 (24.8%) of these synsets appear in SemCor. This shows that most of the senses in WordNet are low-frequency senses.", "cite_spans": [], "ref_spans": [ { "start": 266, "end": 273, "text": "Table 4", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Result of Word Sense Disambiguation", "sec_num": "4.2" }, { "text": "Another issue is that the coverage is contributed mostly by monosemous words; about 17% of the words are ambiguous in WordNet. It seems that there is still room for improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result of Word Sense Disambiguation", "sec_num": "4.2" }, { "text": "In this paper, we propose a methodology to extract Chinese-English translation pairs from a large-scale bilingual terminology bank and to link the translation pairs to WordNet synsets. We faced two problems in this study: 1. word-to-word alignment for each entry in the terminology bank, which helps extract the corresponding English translations for each Chinese word; 2. word sense disambiguation, which helps select the appropriate sense when the English translation of a Chinese word is ambiguous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Researches", "sec_num": "5." }, { "text": "The evaluation of the experiments shows that the f-score of word alignment achieves 98.4%. In the word sense disambiguation stage, the word-sense pairs extracted from the terminology bank cover 26.9% of the WordNet word-sense pairs, and the distinct senses cover 36.89% of the WordNet synsets. The three heuristic rules achieve accuracies of 80%, 83%, and 87%, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Researches", "sec_num": "5." }, { "text": "A bilingual terminology bank provides some advantages over a bilingual parallel corpus for extracting information. For example, we can extract more Chinese-English translation pairs through the various appearances of a word in different compounds. The other advantage is that most of the compound words in the terminology bank are composed of only 2-3 words, which results in the word alignment accuracy on the terminology bank being much higher than on a bilingual corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Researches", "sec_num": "5."
} ], "back_matter": [ { "text": "only 2-3 words, which results in the word alignment accuracy of a terminology bank being much higher than a bilingual corpus.In the future we will try to use some other word sense disambiguation methods to increase the coverage of words and senses in WordNet and to extract more information from terminology bank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Combining Multiple Methods for the Automatic Construction of Multilingual WordNets", "authors": [ { "first": "J", "middle": [], "last": "Atserias", "suffix": "" }, { "first": "S", "middle": [], "last": "Climent", "suffix": "" }, { "first": "X", "middle": [], "last": "Farreres", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "H", "middle": [], "last": "Rodr\u00edguez", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "143--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atserias, J., S. Climent, X. Farreres, G. Rigau and H. Rodr\u00edguez, \"Combining Multiple Methods for the Automatic Construction of Multilingual WordNets,\" In Proceedings of the International Conference on Recent Advances in Natural Language Processing, 1997, Tzigov Chark, Bulgaria, pp. 143-149.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised Sense Disambiguation Using Bilingual Probabilistic Models", "authors": [ { "first": "I", "middle": [], "last": "Bhattacharya", "suffix": "" }, { "first": "L", "middle": [], "last": "Getoor", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "287--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhattacharya, I., L. Getoor and Y. Bengio, \"Unsupervised Sense Disambiguation Using Bilingual Probabilistic Models, \" In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 2004, Barcelona, Spain, pp. 287-294.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discriminative Word Alignment with Conditional Random Fields", "authors": [ { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "T", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blunsom, P. and T. Cohn, \"Discriminative Word Alignment with Conditional Random Fields,\" In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, 2006, Sydney, Australia, pp. 
65-72.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The Mathematics of Machine Translation: Parameter Estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A D" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "J D" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P.F., S.A.D. Pietra, V.J.D. Pietra, and R.L. Mercer, \"The Mathematics of Machine Translation: Parameter Estimation,\" Computational Linguistics, 19(2), 1993, pp. 263-311.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Building A Chinese WordNet Via Class-Based Translation Model", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "T", "middle": [], "last": "Lin", "suffix": "" }, { "first": "G.-N", "middle": [], "last": "You", "suffix": "" }, { "first": "T", "middle": [ "C" ], "last": "Chuang", "suffix": "" }, { "first": "C.-T", "middle": [], "last": "Hsieh", "suffix": "" } ], "year": 2003, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "8", "issue": "2", "pages": "61--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, J.S., T. Lin, G.-N. You, T.C. Chuang and C.-T. Hsieh, \"Building A Chinese WordNet Via Class-Based Translation Model,\" International Journal of Computational Linguistics and Chinese Language Processing, 8(2), 2003, pp. 61-76.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mapping Multilingual Hierarchies using Relaxation Labelling", "authors": [ { "first": "J", "middle": [], "last": "Daud\u00e9", "suffix": "" }, { "first": "L", "middle": [], "last": "Padr\u00f3", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 1999, "venue": "Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daud\u00e9, J., L. Padr\u00f3 and G. Rigau, \"Mapping Multilingual Hierarchies using Relaxation Labelling,\" In Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999, College Park, Maryland.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dempster, A.P., N.M. Laird and D.B. Rubin, \"Maximum likelihood from incomplete data via the EM algorithm,\" Journal of the Royal Statistical Society, 39(1), 1977, pp. 
1-38.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An Unsupervised Method for Word Sense Tagging using Parallel Corpora", "authors": [ { "first": "M", "middle": [], "last": "Diab", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "255--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diab, M. and P. Resnik, \"An Unsupervised Method for Word Sense Tagging using Parallel Corpora,\" In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002, NJ, USA, pp. 255-262.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The SENSEVAL-3 English Lexical Sample Task", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "T", "middle": [], "last": "Chklovski", "suffix": "" } ], "year": null, "venue": "Proceedings of Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text", "volume": "", "issue": "", "pages": "25--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihalcea, R. and T. Chklovski, \"The SENSEVAL-3 English Lexical Sample Task,\" In Proceedings of Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, Barcelona, Spain, pp. 25-28.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "WordNet: An online lexical database", "authors": [ { "first": "G", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "235--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, G., \"WordNet: An online lexical database,\" International Journal of Lexicography, 3(4), 1990, pp. 235-312.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improved Statistical Alignment Models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "N", "middle": [], "last": "Hermann", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F.J. and Hermann N., \"Improved Statistical Alignment Models,\" In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, 2000, Hong Kong, pp. 
440-447.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Longman English-Chinese Dictionary of Contemporary English", "authors": [ { "first": "P", "middle": [], "last": "Proctor", "suffix": "" } ], "year": 1988, "venue": "Longman Group (Far East) Ltd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Proctor, P., \"Longman English-Chinese Dictionary of Contemporary English,\" Longman Group (Far East) Ltd., Hong Kong, 1988.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "HMM-based word alignment in statistical translation", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th conference on Computational linguistics", "volume": "", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vogel, S., H. Ney, C. Tillmann, \"HMM-based word alignment in statistical translation,\" In Proceedings of the 16th conference on Computational linguistics, 1996, Morristown, NJ, pp. 836-841.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "sample entries of the Bilingual Terminology Bank from NICT.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "two possible alignments from English to Chinese for the term pair (practice teaching, \u6559\u5b78 \u5be6\u7fd2).", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "", "num": null, "type_str": "figure" }, "FIGREF11": { "uris": null, "text": "translation probabilities for teaching.", "num": null, "type_str": "figure" }, "FIGREF12": { "uris": null, "text": "example of cross connection, \u6821 and \u8003\u8a66 connected to examination while \u5916 connected to external. example of cross connection: the translation probabilities of the example, it shows that \u6821 is more likely to connect to examination.", "num": null, "type_str": "figure" }, "FIGREF13": { "uris": null, "text": "of is part of compound.", "num": null, "type_str": "figure" }, "FIGREF14": { "uris": null, "text": "of is not part of compound.", "num": null, "type_str": "figure" }, "FIGREF16": { "uris": null, "text": "is another example of the same problem from English to Chinese.", "num": null, "type_str": "figure" }, "FIGREF17": { "uris": null, "text": "cedar can not be connected by both \u96ea and \u677e in this direction.", "num": null, "type_str": "figure" }, "FIGREF18": { "uris": null, "text": "\u96ea and \u677e were not merged in step 1 while universal gravitation was merged in the same step.", "num": null, "type_str": "figure" }, "FIGREF19": { "uris": null, "text": "step 2, \u96ea and \u677e were merged in step 2 and universal gravitation was treated as a unit in the same step.", "num": null, "type_str": "figure" }, "FIGREF20": { "uris": null, "text": "the translations of \u4fe1\u865f\u65d7, {signal, signal flag, code flag}, are nearly share the same sense.", "num": null, "type_str": "figure" }, "FIGREF22": { "uris": null, "text": "the possible translations of \u9632\u6ce2\u5824 and its sense tagged by the previous steps.", "num": null, "type_str": "figure" }, "TABREF4": { "html": null, "text": "", "num": null, "content": "
Figure 3. Translation probabilities for water.
English   Chinese     P( c | e )
water     \u6c34          0.599932
water     \u6c34\u4f4d        0.048781
water     \u6c34\u5206        0.011677
water     \u7528\u6c34        0.011427
water     \u5730\u4e0b\u6c34      0.010800
water     \u6c34\u58d3        0.009310
water     \u6c34\uf97e        0.007905
water     \u6c34\u7ba1        0.007640
water     \u4f4d          0.007471
water     \u6c34\u9762        0.006704

Figure 4. Translation probabilities for tank.
English   Chinese     P( c | e )
tank      \u69fd          0.292606
tank      \u6ac3          0.176049
tank      \u8259          0.077515
tank      \u7bb1          0.034325
tank      \u6c34          0.025067
tank      \u6db2          0.018411
tank      \u6c34\u69fd        0.016570
tank      \u6c60          0.016157
tank      \u7f50          0.015687
tank      \u6c34\u7bb1        0.012206

Figure 5. Translation probabilities for practice.
English   Chinese     P( c | e )
practice  \uf996\u7fd2        0.163636
practice  \u5be6\u7fd2        0.093320
practice  \u6f14\u7fd2        0.058102
practice  \u5be6\u52d9        0.056980
practice  \u64cd\u4f5c        0.051331
practice  \u512a\uf97c        0.042036
practice  \u4f5c\u696d        0.038144
practice  \u65b9\u6cd5        0.036161
practice  \u5be6\u4f5c        0.034805
practice  \u5be6\u969b        0.025800

Figure 6. Translation probabilities for teaching.
English   Chinese     P( c | e )
teaching  \u6559\u5b78        0.698757
teaching  \u6559\u5b78\u6cd5      0.137614
teaching  \u6559\u6750        0.045780
teaching  \u55ae\u5143        0.015502
teaching  \u6559\u5177        0.010315
teaching  \u6559\u5c0e        0.007246
teaching  \u6559\u6703        0.007246
teaching  \u6559\u6388        0.007246
teaching  \u6559\u8a13        0.007246
teaching  \u6559          0.007246
", "type_str": "table" }, "TABREF8": { "html": null, "text": "", "num": null, "content": "
Table 1. Results of word alignment.
recall   precision   f-score   AER
98.2     98.6        98.4      1.6

Table 2. Typical errors of word alignment.
Error Type               Error Samples
Word Segmentation        half-wave/\u534a length/\u6ce2\u9577 criterion/\u6e96\u5247; spiral/\uf911\u65cb coal/\u7164\u6a5f cleaner/\u6d17; american/\u897f ginseng/\u6d0b\uf96b; second/\u518d wind/\u751f\u6c23; microlen/\u5fae\u900f\u93e1\u85d5 coupler/\u5408\u5668; atomic/\u539f\u5b50\u80fd energy/\u968e
transliteration          san/\u8056\u80e1 julian/\uf99a\u5b89
asymmetric translation   navigation/\u822a\ufa08\uf96b\u8003 star/\u661f
abbreviation             double/ III/\u6258\u514b\u99ac\u514b\u71b1\u6838\u53cd\u61c9\u5668
", "type_str": "table" }, "TABREF9": { "html": null, "text": "", "num": null, "content": "
# words#words with correct senseaccuracy rate
Heuristic 120016080.0 %
Heuristic 220016783.5 %
Heuristic 320017487.0 %
", "type_str": "table" }, "TABREF10": { "html": null, "text": "", "num": null, "content": "
#tokens#word-sense pairsword-sense pair coverage#synsetssynset coverage
monosemous word370,99148,62323.94 %39,953 34.61 %
Heuristic 129,4224,2112.07 %3,4522.99 %
Heuristic 229,3112,0501.00 %1,6851.46 %
Heuristic 381,7341,9310.95 %--
Total484,77154,65426.90 %42,589 36.89 %
", "type_str": "table" } } } }