{ "paper_id": "O03-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:01:29.753223Z" }, "title": "Word-Transliteration Alignment", "authors": [ { "first": "Tracy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao Tung University", "location": { "addrLine": "Ta Hsueh Road", "postCode": "1001, 300", "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "tracylin@cm.nctu.edu.tw" }, { "first": "Chien-Cheng", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "addrLine": "101, Kuangfu Road, Hsinchu, 300", "country": "Taiwan" } }, "email": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "addrLine": "101, Kuangfu Road, Hsinchu, 300", "country": "Taiwan" } }, "email": "jschang@cs.nthu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The named-entity phrases in free text represent a formidable challenge to text analysis. Translating a named-entity is important for the task of Cross Language Information Retrieval and Question Answering. However, both tasks are not easy to handle because named-entities found in free text are often not listed in a monolingual or bilingual dictionary. Although it is possible to identify and translate named-entities on the fly without a list of proper names and transliterations, an extensive list certainly will ensure the high accuracy rate of text analysis. We use a list of proper names and transliterations to train a Machine Transliteration Model. With the model it is possible to extract proper names and their transliterations in a bilingual corpus with high average precision and recall rates.", "pdf_parse": { "paper_id": "O03-1001", "_pdf_hash": "", "abstract": [ { "text": "The named-entity phrases in free text represent a formidable challenge to text analysis. Translating a named-entity is important for the task of Cross Language Information Retrieval and Question Answering. However, both tasks are not easy to handle because named-entities found in free text are often not listed in a monolingual or bilingual dictionary. Although it is possible to identify and translate named-entities on the fly without a list of proper names and transliterations, an extensive list certainly will ensure the high accuracy rate of text analysis. We use a list of proper names and transliterations to train a Machine Transliteration Model. With the model it is possible to extract proper names and their transliterations in a bilingual corpus with high average precision and recall rates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multilingual named entity identification and (back) transliteration has been increasingly recognized as an important research area for many applications, including machine translation (MT), cross language information retrieval (CLIR), and question answering (QA). These transliterated words are often domainspecific and many of them are not found in existing bilingual dictionaries. Thus, it is difficult to handle transliteration only via simple dictionary lookup. For CLIR, the accuracy of transliteration highly affects the performance of retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Transliteration of proper names tends to be varied from translator to translator. Consensus on transliteration of celebrated place and person names emerges over a short period of inconsistency and stays unique and unchanged thereafter. But for less known persons and unfamiliar places, the transliterations of names may vary a great deal. That is exacerbated by different systems used for Ramanizing Chinese or Japanese person and place names. For back transliteration task of converting many transliterations back to the unique original name, there is one and only solution. So back transliteration is considered more difficult than transliteration. Knight and Graehl (1998) pioneered the study of machine transliteration and proposed a statistical transliteration model from English to Japanese to experiment on back transliteration of Japanese named entities. Most previous approaches to machine transliteration (Al-Onaizan and Knight, 2002; Chen et al., 1998; Lin and Chen, 2002) ; English/Japanese (Knight and Graehl, 1998; Lee and Choi, 1997; Oh and Choi, 2002) focused on the tasks of transliteration and back-transliteration. Very little has been touched upon for the issue of aligning and acquiring words and transliterations in a parallel corpus.", "cite_spans": [ { "start": 651, "end": 675, "text": "Knight and Graehl (1998)", "ref_id": "BIBREF8" }, { "start": 931, "end": 944, "text": "Knight, 2002;", "ref_id": "BIBREF0" }, { "start": 945, "end": 963, "text": "Chen et al., 1998;", "ref_id": "BIBREF1" }, { "start": 964, "end": 983, "text": "Lin and Chen, 2002)", "ref_id": "BIBREF10" }, { "start": 1003, "end": 1028, "text": "(Knight and Graehl, 1998;", "ref_id": "BIBREF8" }, { "start": 1029, "end": 1048, "text": "Lee and Choi, 1997;", "ref_id": "BIBREF9" }, { "start": 1049, "end": 1067, "text": "Oh and Choi, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The alternative to on-the-fly (back) machine transliteration is simple lookup in an extensive list automatically acquired from parallel corpora. Most instances of (back) transliteration of proper names can often be found in a parallel corpus of substantial size and relevant to the task. For instance, fifty topics of the CLIR task in the NTCIR 3 evaluation conference contain many named entities (NEs) that require (back) transliteration. The CLIR task involves document retrieval from a collection of late 1990s news articles published in Taiwan. Most of those NEs and transliterations can be found in the articles from the Sinorama Corpus of parallel Chinese-English articles dated from 1990 to 2001, including \"Bill Clinton,\" \"Chernobyl,\" \"Chiayi,\" \"Han dynasty,\" \"James Soong,\" \"Kosovo,\" \"Mount Ali,\" \"Nobel Prize,\" \"Oscar,\" \"Titanic,\" and \"Zhu Rong Ji.\" Therefore it is important for CLIR research that we align and extract words and transliterations in a parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we propose a new machine transliteration method based on a statistical model trained automatically on a bilingual proper name list via unsupervised learning. We also describe how the parameters in the model can be estimated and smoothed for best results. 
Moreover, we show how the model can be applied to align and extract words and their transliterations in a parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The remainder of the paper is organized as follows: Section 2 lays out the model and describes how to apply the model to align words and transliterations. Section 3 describes how the model is trained on a set of proper names and transliterations. Section 4 describes experiments and evaluation. Section 5 contains discussion, and we conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We will first illustrate our approach with examples. A formal treatment of the approach will follow in Section 2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Transliteration Model", "sec_num": "2." }, { "text": "Consider the case where one is to convert a word in English into another language, say, Chinese, based on its phonemes rather than its meaning. For instance, consider the transliteration of the word \"Stanford\" into Chinese. The most common transliteration of \"Stanford\" is \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "2.1" }, { "text": ".\" (Romanization: [shi-dan-fo]). We assume that transliteration is a piecemeal, statistical process, converting one to six letters at a time to a Chinese character. For instance, to transliterate \"Stanford,\" the word is broken into \"s,\" \"tan,\" \"for,\" and \"d,\" which are converted into zero to two Chinese characters independently. Those fragments of the word in question are called transliteration units (TUs). In this case, the TU \"s\" is converted to the Chinese character \" ,\" \"tan\" to \" ,\" \"for\" to \" ,\" and \"d\" to the empty string \u03bb. In other words, we model the transliteration process based on the independent conversion of TUs. Therefore, we have the transliteration probability of getting the transliteration \" \" given \"Stanford,\" P( | Stanford):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "2.1" }, { "text": "P( | Stanford) = P( | s) P( | tan) P( | for) P( \u03bb | d)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "2.1" }, { "text": "There are several ways such a machine transliteration model (MTM) can be applied, including (1) transliteration of proper names, (2) back transliteration to the original proper name, and (3) word-transliteration alignment in a parallel corpus. We formulate those three problems as probabilistic functions under the MTM:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "2.1" }, { "text": "Transliteration problem (TP)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "2.1" }, { "text": "Given a word w (usually a proper noun) in a language (L1), produce automatically the transliteration t in another language (L2).
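As a concrete illustration of the TU-product model just described, the following minimal Python sketch scores one fixed decomposition of 'Stanford'; the probability table and the placeholder target characters C1-C3 are toy assumptions, not estimates from real training data:

    # Toy P(tau | omega): C1-C3 stand in for Chinese characters, '' for the empty string
    TOY_P = {('s', 'C1'): 0.5, ('tan', 'C2'): 0.6, ('for', 'C3'): 0.4, ('d', ''): 0.9}

    def p_translit(tu_pairs):
        # P(t | w) for one fixed decomposition: the product of per-TU probabilities
        p = 1.0
        for omega, tau in tu_pairs:
            p *= TOY_P.get((omega, tau), 0.0)
        return p

    # 'Stanford' decomposed as 's' + 'tan' + 'for' + 'd'
    print(p_translit([('s', 'C1'), ('tan', 'C2'), ('for', 'C3'), ('d', '')]))  # 0.108

In practice one maximizes this product over all possible decompositions, as formalized in Equation (1) below.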
For instance, the transliterations in (2) are the results of solving the TP for four given words in (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "2.1" }, { "text": "(1) Berg, Stanford, Nobel, Tsing Hua (2) , , ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples", "sec_num": "2.1" }, { "text": "Given a transliteration t in a language (L2), produce automatically the original word w in (L1) that gives rise to t. For instance, the words in (4) are the results of solving the BTP for the two given transliterations in (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Back transliteration Problem (BTP)", "sec_num": null }, { "text": "(3) , (4) Michelangelo, Lin Ku-fang", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Back transliteration Problem (BTP)", "sec_num": null }, { "text": "Given a pair of a sentence and its translation counterpart, align the words and transliterations therein. For instance, given (5a) and (5b), the alignment results are the three word-transliteration pairs in (6), while the two pairs of word and back transliteration in (8) are the results of solving the WTAP for (7a) and (7b). (5a) Paul Berg, professor emeritus of biology at Stanford University and a Nobel laureate, \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Transliteration Alignment Problem (WTAP)", "sec_num": null }, { "text": "(5b) 1 (6) (Stanford, ), (Nobel, ), (Berg, )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Transliteration Alignment Problem (WTAP)", "sec_num": null }, { "text": "PRC premier Zhu Rongji's saber-rattling speech on the eve of the election is also seen as having aroused resentment among Taiwan's electorate, and thus given Chen Shui-bian a last-minute boost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Transliteration Alignment Problem (WTAP)", "sec_num": null }, { "text": "(7b) 2 (8) (Zhu Rongji, ), (Chen Shui-bian, )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Transliteration Alignment Problem (WTAP)", "sec_num": null }, { "text": "Both transliteration and back transliteration are important for machine translation and cross-language information retrieval. For instance, person and place names are likely not listed in a dictionary and therefore should be mapped to the target language via run-time transliteration. Similarly, a large percentage of keywords in a cross-language query are person and place names. It is important for an information system to produce appropriate counterpart names in the language of the documents being searched.
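One simple way to produce such counterpart names is to rank candidate source words by P(t | w) P(w), following the back-transliteration formulation in Equations (3) and (4) below; in this illustrative sketch the candidate list, the priors, and the per-TU scores are all toy assumptions:

    TOY_P = {('s', 'C1'): 0.5, ('tan', 'C2'): 0.6, ('tam', 'C2'): 0.1,
             ('for', 'C3'): 0.4, ('d', ''): 0.9}      # toy P(tau | omega)

    CANDIDATES = {  # candidate w -> (a fixed TU decomposition of t, toy prior P(w))
        'Stanford': ([('s', 'C1'), ('tan', 'C2'), ('for', 'C3'), ('d', '')], 1e-6),
        'Stamford': ([('s', 'C1'), ('tam', 'C2'), ('for', 'C3'), ('d', '')], 1e-7),
    }

    def back_transliterate():
        # w* = arg max P(t | w) P(w), with P(t | w) a product of per-TU scores
        def score(tus, prior):
            p = prior
            for omega, tau in tus:
                p *= TOY_P.get((omega, tau), 0.0)
            return p
        return max(CANDIDATES, key=lambda w: score(*CANDIDATES[w]))

    print(back_transliterate())   # 'Stanford' wins under these toy numbers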
Those counterparts can be obtained via direct transliteration based on the machine transliteration and language models (of proper names in the target language).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Transliteration Alignment Problem (WTAP)", "sec_num": null }, { "text": "The memory-based alternative is to find those word-transliteration pairs in the aligned sentences of a parallel corpus (Chuang, You, and Chang 2002) . The word-transliteration alignment problem can certainly be dealt with using lexical statistics (Gale and Church 1992; Melamed 2000). However, lexical statistics are known to be very ineffective for low-frequency words (Dunning 1993). We therefore propose to attack WTAP at the sub-lexical, phoneme level.", "cite_spans": [ { "start": 113, "end": 142, "text": "(Chuang, You, and Chang 2002)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Word Transliteration Alignment Problem (WTAP)", "sec_num": null }, { "text": "We propose a new way of modeling the transliteration of an English word w into Chinese t via a Machine Transliteration Model. We assume that transliteration is carried out by decomposing w into k transliteration units (TUs), \u03c9 1 , \u03c9 2 , \u2026, \u03c9 k , which are subsequently converted independently into \u03c4 1 , \u03c4 2 , \u2026, \u03c4 k respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "Finally, \u03c4 1 , \u03c4 2 , \u2026, \u03c4 k are put together, forming t as output. Therefore, the probability of converting w into t can be expressed as P(t | w) in Equation (1); see Figure 1 for more details.", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 148, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "P(t | w) = max over \u03c9 1 \u2026\u03c9 k and \u03c4 1 \u2026\u03c4 k of \u03a0 i=1..k P(\u03c4 i | \u03c9 i ), where w = \u03c9 1 \u03c9 2 \u2026\u03c9 k , t = \u03c4 1 \u03c4 2 \u2026\u03c4 k , |t| \u2264 k \u2264 |t|+|w|, and \u03c4 i \u03c9 i \u2260 \u03bb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "Based on the MTM, we can formulate the solution to the Transliteration Problem by optimizing P(t | w) for the given w. On the other hand, we can formulate the solution to the Back Transliteration Problem by optimizing P(t | w) P(w) for the given t. See Equations (2) through (4) in Figure 1 for more details.", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 189, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "The word-transliteration alignment process may be handled by first finding the proper names in English and then matching up each proper name with its transliteration.
For instance, consider the following sentences in the Sinorama Corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "(9c) \u4f60 \u4f60 (9e) \"When you understand all about the sun and all about the atmosphere and all about the rotation of the earth, you may still miss the radiance of the sunset.\" So wrote English philosopher Alfred North Whitehead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "It is not difficult to build a part-of-speech tagger or a named-entity recognizer for finding the following proper names (PNs):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "(10a) Alfred, (10b) North, (10c) Whitehead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "We use Equation (5) in Figure 1 to model the alignment of a word w and its transliteration t in s based on the alignment probability P(s, w), which is the product of the transliteration probability P(\u03c3 | \u03c9) and a trigram match probability P(m i | m i-2 , m i-1 ), where m i is the type of the i-th match in the alignment path.", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 28, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "We define three match types based on the lengths a and b, a = | \u03c4 |, b = | \u03c9 |: match(a, b) = H if b = 0, match(a, b) = V if a = 0, and match(a, b) = D if a > 0 and b > 0. The D-match represents a non-empty TU \u03c9 matching a transliteration character \u03c4, while the V-match represents English letters omitted in the transliteration process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "2.2" }, { "text": "The probability of the transliteration t of the word w", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(t | w) = max over \u03c9 1 \u2026\u03c9 k and \u03c4 1 \u2026\u03c4 k of \u03a0 i=1..k P(\u03c4 i | \u03c9 i )", "eq_num": "(1)" } ], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "
where w = \u03c9 1 \u03c9 2 \u2026 \u03c9 k , t = \u03c4 1 \u03c4 2 \u2026\u03c4 k , | t | \u2264 k \u2264 | t | + | w |, and | \u03c4 i \u03c9 i | \u2265 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "TRANSLITERATION: Produce the phonetic translation equivalent t for the given word w:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t = arg max t P(t | w)", "eq_num": "(2)" } ], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "BACK TRANSLITERATION: Produce the original word w for the given transliteration t:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "P(w | t) = P(t | w) P(w) / P(t) (3) w = arg max w P(t | w) P(w) / P(t) = arg max w P(t | w) P(w) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "WORD-TRANSLITERATION ALIGNMENT: Align a word w with its transliteration t in a sentence s:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(s, w) = max over \u03c9 1 \u2026\u03c9 k and \u03c3 1 \u2026\u03c3 k of \u03a0 i=1..k P(\u03c3 i | \u03c9 i ) P(m i | m i-2 , m i-1 )", "eq_num": "(5)" } ], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "where w = \u03c9 1 \u03c9 2 ...\u03c9 k , s = \u03c3 1 \u03c3 2 ...\u03c3 k , and both \u03c9 i and \u03c3 i can be empty. To compute the alignment probability efficiently, we need to define and calculate the forward probability \u03b1(i, j) of P(s, w) via dynamic programming (Manning and Schutze 1999) . \u03b1(i, j) denotes the probability of aligning the first i Chinese characters of s and the first j English letters of w. For the match-type trigram in Equations (5) and (8), we also need to compute \u00b5(i, j), the types of the last two matches on the Viterbi alignment path. See Equations (5) through (9) in Figure 1 for more details.", "cite_spans": [ { "start": 229, "end": 255, "text": "(Manning and Schutze 1999)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 551, "end": 559, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "| s | \u2264 k \u2264 | w | + | s |, |\u03c9 i \u03c3 i | \u2265 1, and m i = match(|\u03c9 i |, |\u03c3 i |) is the type of the (\u03c9 i , \u03c3 i ) match", "eq_num": "" } ], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "For instance, given w = \"Whitehead\" and s = \" \u4f60 \u4f60 ,\" the best Viterbi path indicates a decomposition of the word \"Whitehead\" into four TUs, \"whi,\" \"te,\" \"hea,\" and \"d,\" matching \" ,\" \u03bb, \" ,\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "and \" \" respectively.
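The dynamic program itself is compact enough to sketch in full. The following self-contained Python illustration follows Equations (5) through (9) under simplifying assumptions: sigma is at most one character, omega is at most three letters, the match-type trigram P(m i | m i-2 , m i-1 ) is collapsed into a single per-type weight, and the digits 1-3 stand in for the Chinese characters, so every number here is a toy value:

    TOY_P = {('whi', '1'): 0.6, ('hea', '2'): 0.5, ('d', '3'): 0.7, ('te', ''): 0.3}
    SKIP = 0.05                               # toy score for skipping a sentence character
    MATCH_W = {'H': 0.1, 'V': 0.2, 'D': 0.7}  # stand-in for the match-type trigram

    def p_tu(omega, sigma):
        if omega == '':
            return SKIP                       # H-match: character aligned to nothing
        return TOY_P.get((omega, sigma), 0.0)

    def align(s, w):
        # alpha[i][j]: best probability of aligning s[:i] with w[:j], as in Equation (8)
        alpha = [[0.0] * (len(w) + 1) for _ in range(len(s) + 1)]
        back = {}
        alpha[0][0] = 1.0
        for i in range(len(s) + 1):
            for j in range(len(w) + 1):
                if i == 0 and j == 0:
                    continue
                best, arg = 0.0, None
                for a in (0, 1):              # a = number of characters in sigma
                    for b in (0, 1, 2, 3):    # b = number of letters in omega
                        if a + b == 0 or a > i or b > j:
                            continue
                        m = 'H' if b == 0 else ('V' if a == 0 else 'D')
                        p = alpha[i - a][j - b] * p_tu(w[j - b:j], s[i - a:i]) * MATCH_W[m]
                        if p > best:
                            best, arg = p, (a, b, m)
                alpha[i][j], back[i, j] = best, arg
        # Viterbi backtrace, keeping the D- and V-matches (the aligned TU pairs)
        i, j, pairs = len(s), len(w), []
        while (i, j) != (0, 0):
            a, b, m = back[i, j]
            if m != 'H':
                pairs.append((w[j - b:j], s[i - a:i]))
            i, j = i - a, j - b
        return alpha[len(s)][len(w)], list(reversed(pairs))

    print(align('123', 'whitehead'))
    # (~0.0043, [('whi', '1'), ('te', ''), ('hea', '2'), ('d', '3')])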
By extracting the sequence of D- and V-matches, we generate the result of word-transliteration alignment. For instance, we will have ( , Whitehead) as the output. See Figure 2 for more details.", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 190, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "MACHINE TRANSLITERATION MODEL:", "sec_num": null }, { "text": "In the training phase, we estimate the transliteration probability function P(\u03c4 | \u03c9), for any given TU \u03c9 and transliteration character \u03c4, based on a given list of word-transliteration pairs. Based on the Expectation Maximization (EM) algorithm with Viterbi decoding (Forney, 1973) , the iterative parameter estimation procedure on training data consisting of a word-transliteration list (E k , C k ), k = 1 to n, is described as follows:", "cite_spans": [ { "start": 261, "end": 275, "text": "(Forney, 1973)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation of Model Parameters", "sec_num": "3." }, { "text": "Initialization Step:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of Model Parameters", "sec_num": "3." }, { "text": "Initially, we have a simple model P 0 (\u03c4 | \u03c9) based on the Romanization R(\u03c4) of the Chinese character \u03c4 (Equation (8)). For instance, given w = 'Nayyar' and t = ' ,' we have R(\u03c4 1 ) = 'na' and R(\u03c4 2 ) = 'ya' under the Yanyu Pinyin Romanization System. Therefore, breaking up w into the two TUs \u03c9 1 = 'nay' and \u03c9 2 = 'yar' is most probable, since that maximizes P 0 (\u03c4 1 | \u03c9 1 ) \u00d7 P 0 (\u03c4 2 | \u03c9 2 ): P 0 (\u03c4 1 | \u03c9 1 ) = sim( na | nay) = 2 \u00d7 2 / (2+3) = 0.8, P 0 (\u03c4 2 | \u03c9 2 ) = sim( ya | yar) = 2 \u00d7 2 / (2+3) = 0.8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of Model Parameters", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P 0 (\u03c4 | \u03c9) = sim( R(\u03c4) | \u03c9) = dice(t 1 t 2 \u2026t a , w 1 w 2 \u2026w b ) = 2c / (a + b)", "eq_num": "(8)" } ], "section": "Estimation of Model Parameters", "sec_num": "3." }, { "text": "In the Expectation Step, we find the best way to describe how a word gets transliterated via decomposition into TUs, which amounts to finding the best Viterbi path aligning TUs in E k and characters in C k for all pairs (E k , C k ), k = 1 to n, in the training set. This can be done using Equations (5) through (9). In the training phase, we have the slightly different situation of s = t. Table 1 shows the results of using P 0 (\u03c4 | \u03c9) to align TUs and transliteration characters (columns: w, s = t, and the \u03c9-\u03c4 matches on the Viterbi path).", "cite_spans": [], "ref_spans": [ { "start": 385, "end": 392, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Expectation Step:", "sec_num": null }, { "text": "The Viterbi path can be found via a dynamic programming process of calculating the forward probability function \u03b1(i, j) of the transliteration alignment probability P(E k , C k ) for 0 < i < | C k | and 0 < j < | E k |. After calculating P(C k , E k ) via dynamic programming, we also obtain the TU matches (\u03c4, \u03c9) on the Viterbi path.
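To make the Initialization Step above concrete, here is a small sketch of the Romanization-based similarity of Equation (8); the Romanization table is a toy stand-in for the Yanyu Pinyin system (T1 and T2 are placeholder characters), and counting shared letters with a set intersection is a simplifying assumption:

    ROMANIZATION = {'T1': 'na', 'T2': 'ya'}    # toy R(tau)

    def dice(r, omega):
        c = len(set(r) & set(omega))           # letters shared by R(tau) and omega
        return 2.0 * c / (len(r) + len(omega))

    def p0(tau, omega):
        return dice(ROMANIZATION[tau], omega)  # P0(tau | omega) = sim(R(tau) | omega)

    print(p0('T1', 'nay'), p0('T2', 'yar'))    # 0.8 0.8, matching the running example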
After all pairs are processed and the TUs and transliteration characters are found, we then re-estimate the transliteration probability P(\u03c4 | \u03c9) in the Maximization Step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expectation Step:", "sec_num": null }, { "text": "Based on all the TU alignment pairs obtained in the Expectation Step, we update the maximum likelihood estimates (MLE) of the model parameters using Equation (9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximization", "sec_num": null }, { "text": "P MLE (\u03c4 | \u03c9) = \u2211 i=1..n count(\u03c4, \u03c9) in matches of (E i , C i ) / \u2211 i=1..n \u2211 \u03c4' count(\u03c4', \u03c9) in matches of (E i , C i ) (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximization", "sec_num": null }, { "text": "The Viterbi EM algorithm iterates between the Expectation Step and the Maximization Step until a stopping criterion is reached or a predefined number of iterations is completed. Re-estimation of P(\u03c4 | \u03c9) leads to convergence under the Viterbi EM algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximization", "sec_num": null }, { "text": "The maximum likelihood estimate is generally not suitable for statistical inference of the parameters in the proposed machine transliteration model due to data sparseness (even if we use a longer list of names for training, the problem still exists). MLE does not capture the fact that there are other transliteration possibilities that we may not have encountered. For instance, consider the task of aligning the word \"Michelangelo\" and the transliteration \" \" in Example (11):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Smoothing", "sec_num": "3.1" }, { "text": "(11) (Michelangelo, )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Smoothing", "sec_num": "3.1" }, { "text": "It turns out that the model trained on some word-transliteration data provides the MLE parameters of the MTM shown in Table 2 . Understandably, the MLE-based model assigns 0 probability to many cases not seen in the training data, and that can lead to problems in word-transliteration alignment. For instance, relevant parameters for Example (11) such as P( | che) and P( | lan) are given 0 probability. Good-Turing estimation is one of the most commonly used approaches to deal with the problems caused by data sparseness and zero probability. However, GTE assigns identical probabilistic values to all unseen events, which might lead to problems in our case. We observed that although there is great variation in the Chinese transliteration characters for any given English word, the initials, mostly consonants, tend to be consistent.
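That consistency of initials suggests the back-off described next: linearly interpolate a Good-Turing discounted TU-to-TU estimate with a class-based initial-to-initial estimate. In the sketch below the interpolation weight and both component tables are toy assumptions, and L1, L2 are placeholder characters:

    LAMBDA = 0.8                            # toy interpolation weight
    P_GT = {('lan', 'L1'): 0.0004}          # Good-Turing TU-to-TU estimates (toy)
    P_INI = {('l', 'l'): 0.03}              # initial-to-initial class estimates (toy)
    ROM = {'L1': 'lan', 'L2': 'luo'}        # toy Romanization, for the initial of tau

    def p_smoothed(tau, omega):
        p_class = P_INI.get((omega[0], ROM[tau][0]), 0.0)
        return LAMBDA * P_GT.get((omega, tau), 0.0) + (1 - LAMBDA) * p_class

    print(p_smoothed('L2', 'lan'))          # nonzero even for an unseen (TU, character) pair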
See Table 3 for more details. Based on that observation, we use the linear interpolation of the Good-Turing estimation of the TU-to-TU function and the class-based initial-to-initial function to approximate the parameters in the MTM.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 2", "ref_id": "TABREF0" }, { "start": 833, "end": 844, "text": "Table 3 for", "ref_id": null } ], "eq_spans": [], "section": "Parameter Smoothing", "sec_num": "3.1" }, { "text": "We have carried out a rigorous evaluation of an implementation of the method proposed in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and evaluation", "sec_num": "4" }, { "text": "Close examination of the experimental results reveals that the machine transliteration model is generally effective in aligning and extracting proper names and their transliterations from a parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and evaluation", "sec_num": "4" }, { "text": "The parameters of the transliteration model were trained on some 1,700 proper names and transliterations from Scientific American Magazine. We placed 10 H-matches before and after the Viterbi alignment path to simulate the word-transliteration situation and trained the trigram match-type probability; Table 4 shows the estimates of the trigram model. The model was then tested on three sets of test data: (1) 200 bilingual examples in the Longman Dictionary of Contemporary English, English-Chinese Edition. (2) 200 aligned sentences from Scientific American, US and Taiwan Editions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and evaluation", "sec_num": "4" }, { "text": "(3) 200 aligned sentences from the Sinorama Corpus. Table 5 shows that on average the precision rate for exact match is between 75% and 90%, while the precision rate for character-level partial match is from 90% to 95%. The average recall rates are about the same as the precision rates. ", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 59, "text": "Table 5", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments and evaluation", "sec_num": "4" }, { "text": "The success of the proposed method for the most part has to do with its capability to balance the conflicting needs of capturing the lexical preferences of transliteration and of smoothing to cope with data sparseness and ensure generality. Although we experimented with a model trained on English-to-Chinese transliteration, the model seemed to perform reasonably well even in the opposite direction, Chinese-to-English transliteration. This indicates that the model, together with the parameter estimation method, is very general in terms of dealing with unseen events and bi-directionality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "We have restricted our discussion and experiments to the transliteration of proper names. While it is commonplace for Japanese to have transliterations of common nouns, transliteration of Chinese common nouns into English is rare. It seems that is so only when the term is culture-specific and there are no counterparts in the West. For instance, most instances of \" \" and \" \" found in the Sinorama corpus are mapped into lower-case transliterations, as shown in Examples (11) and (12):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "(11a) (11b) Are ch'i-p'aos--the national dress of China--really out of fashion?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "(12a) (12b) a scroll of shou chin ti calligraphy. Without capitalized transliterations, it remains to be seen how word-transliteration alignment related to common nouns should be handled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5."
}, { "text": "In this paper, we propose a new statistical machine transliteration model and describe how to apply the model to extract words and transliterations in a parallel corpus. The model was first trained on a modest list of names and transliteration. The training resulted in a set of 'syllabus' to character transliteration probabilities, which are subsequently used to extract proper names and transliterations in a parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "These named entities are crucial for the development of named entity identification module in CLIR and QA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "We carried out experiments on an implementation of the word-transliteration alignment algorithms and tested on three sets of test data. The evaluation showed that very high precision rates were achieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "A number of interesting future directions present themselves. First, it would be interesting to see how effectively we can port and apply the method to other language pairs such as English-Japanese and English-Korean. We are also investigating the advantages of incorporate a machine transliteration module in sentence and word alignment of parallel corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Scientific American, US and Taiwan editions. What Clones? Were claims of the first human embryo premature? Gary Stix and (Trans.) December 24, 2001.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Sinorama Chinese-English Magazine, A New Leader for the New Century--Chen Elected President, April 2000, p. 13.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We acknowledge the support for this study through grants from National Science Council and Ministry of Education, Taiwan (NSC 90-2411-H-007-033-MC and MOE EX-91-E-FA06-4-4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null }, { "text": "path to simulate the word-transliteration situation and trained the trigram match type probability. Table 4 shows the estimates of the trigram model. The model was then tested on three sets of test data:", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Translating named entities using monolingual and bilingual resources", "authors": [ { "first": "Y", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "400--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al-Onaizan, Y. and K. Knight. 2002. Translating named entities using monolingual and bilingual re- sources. 
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 400-408.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Proper name translation in cross-language information retrieval", "authors": [ { "first": "H", "middle": [ "H" ], "last": "Chen", "suffix": "" }, { "first": "S-J", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Y-W", "middle": [], "last": "Ding", "suffix": "" }, { "first": "S-C", "middle": [], "last": "Tsai", "suffix": "" } ], "year": 1998, "venue": "Proceedings of 17th COLING and 36th ACL", "volume": "", "issue": "", "pages": "232--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, H.H., S-J Huang, Y-W Ding, and S-C Tsai. 1998. Proper name translation in cross-language information retrieval. In Proceedings of 17th COLING and 36th ACL, pages 232-236.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adaptive Bilingual Sentence Alignment", "authors": [ { "first": "T", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "G", "middle": [ "N" ], "last": "You", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" } ], "year": 2002, "venue": "Lecture Notes in Artificial Intelligence", "volume": "2499", "issue": "", "pages": "21--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chuang, T., G.N. You, and J.S. Chang. 2002. Adaptive Bilingual Sentence Alignment. Lecture Notes in Artificial Intelligence 2499, 21-30.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "What Clones? SCIENTIFIC AMERICAN, Inc", "authors": [ { "first": "J", "middle": [ "B R P" ], "last": "Cibelli", "suffix": "" }, { "first": "M", "middle": [ "D" ], "last": "Lanza", "suffix": "" }, { "first": "C", "middle": [], "last": "West", "suffix": "" }, { "first": "", "middle": [], "last": "Ezzell", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cibelli, J.B., R.P. Lanza, M.D. West, and C. Ezzell. 2002. What Clones? SCIENTIFIC AMERICAN, Inc., New York, January. http://www.sciam.com.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Robust bilingual word alignment for machine aided translation", "authors": [ { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan, I., K.W. Church, and W.A. Gale. 1993. Robust bilingual word alignment for machine aided translation. In Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, pages 1-8, Columbus, Ohio.", "links": null },
"BIBREF6": { "ref_id": "b6", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dempster, A.P., N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Viterbi algorithm", "authors": [ { "first": "G", "middle": [ "D" ], "last": "Forney", "suffix": "" } ], "year": 1973, "venue": "Proceedings of IEEE", "volume": "61", "issue": "", "pages": "268--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Forney, G.D. 1973. The Viterbi algorithm. Proceedings of IEEE, 61:268-278, March.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Machine transliteration", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "4", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knight, K. and J. Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599-612.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A statistical method to generate various foreign word transliterations in multilingual information retrieval system", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Lee", "suffix": "" }, { "first": "K-S", "middle": [], "last": "Choi", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 2nd International Workshop on Information Retrieval with Asian Languages (IRAL'97)", "volume": "", "issue": "", "pages": "123--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, J.S. and K-S Choi. 1997. A statistical method to generate various foreign word transliterations in multilingual information retrieval system. In Proceedings of the 2nd International Workshop on Information Retrieval with Asian Languages (IRAL'97), pages 123-128, Tsukuba, Japan.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Backward transliteration by learning phonetic similarity", "authors": [ { "first": "W-H", "middle": [], "last": "Lin", "suffix": "" }, { "first": "H-H", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2002, "venue": "CoNLL-2002, Sixth Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, W-H and H-H Chen. 2002. Backward transliteration by learning phonetic similarity.
In CoNLL-2002, Sixth Conference on Natural Language Learning, Taipei, Taiwan.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "Ch", "middle": [], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, Ch. and H. Schutze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, 1st edition.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An English-Korean transliteration model using pronunciation and contextual rules", "authors": [ { "first": "J-H", "middle": [], "last": "Oh", "suffix": "" }, { "first": "K-S", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oh, J-H and K-S Choi. 2002. An English-Korean transliteration model using pronunciation and contextual rules. In Proceedings of the 19th International Conference on Computational Linguistics (COLING), Taipei, Taiwan.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Sinorama Magazine", "authors": [ { "first": "", "middle": [], "last": "Sinorama", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinorama. 2002. Sinorama Magazine. http://www.greatman.com.tw/sinorama.htm.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Translating names and technical terms in Arabic text", "authors": [ { "first": "B", "middle": [ "G" ], "last": "Stalls", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the COLING/ACL Workshop on Computational Approaches to Semitic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stalls, B.G. and K. Knight. 1998. Translating names and technical terms in Arabic text. In Proceedings of the COLING/ACL Workshop on Computational Approaches to Semitic Languages.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic extraction of translational Japanese-KATAKANA and English word pairs from bilingual corpora", "authors": [ { "first": "K", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2002, "venue": "International Journal of Computer Processing of Oriental Languages", "volume": "15", "issue": "3", "pages": "261--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsujii, K. 2002. Automatic extraction of translational Japanese-KATAKANA and English word pairs from bilingual corpora. International Journal of Computer Processing of Oriental Languages, 15(3):261-279.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "match(a, b) = H if b = 0; match(a, b) = V if a = 0; match(a, b) = D if a > 0 and b > 0. P(m i | m i-2 , m i-1 ) is the trigram Markov model probability of match types.
\u03b1(i, j) = P(s 1:i-1 , w 1:j-1 ) = max a,b \u03b1(i-a, j-b) P(s i-a:i-1 | w j-b:j-1 ) P(match(a, b) | \u00b5(i-a, j-b)). (8) \u00b5(i, j) = (m, match(a*, b*)), where \u00b5(i-a*, j-b*) = (x, m), (9) and (a*, b*) = arg max a,b \u03b1(i-a, j-b) P(s i-a:i-1 | w j-b:j-1 ) P(match(a, b) | \u00b5(i-a, j-b)).", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Figure 1: The equations for finding the Viterbi path of matching a proper name and its translation in a sentence. Figure 2: The Viterbi alignment path for Example (9c) and the proper name \"Whitehead\" (10c) in the sentence (9e), consisting of one V-match (te-\u03bb), three D-matches (whi\u2212 , hea\u2212 , d\u2212 ), and many H-matches.", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "R(\u03c4) = Romanization of Chinese character \u03c4; R(\u03c4) = t 1 t 2 \u2026t a ; \u03c9 = w 1 w 2 \u2026w b ; c = # of common letters between R(\u03c4) and \u03c9.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "type_str": "table", "html": null, "text": "P MLE (\u03c4 | \u03c9) values relevant to Example (11)", "content": "
English TU \u03c9 | Transliteration \u03c4 | P MLE (\u03c4 | \u03c9)
mi |  | 0.00394
mi |  | 0.00360
mi |  | 0.00034
mi |  | 0.00034
mi |  | 0.00017
che |  | 0.00034
che |  | 0.00017
che |  | 0.00017
che |  | 0.00017
che |  | 0.00017
che |  | 0.00017
che |  | 0
lan |  | 0.00394
lan |  | 0.00051
lan |  | 0.00017
lan |  | 0
ge |  | 0.00102
ge |  | 0.00085
ge |  | 0.00068
ge |  | 0.00017
ge |  | 0.00017
lo |  | 0.00342
lo |  | 0.00171
lo |  | 0.00017
", "num": null }, "TABREF1": { "type_str": "table", "html": null, "text": "The experimental results of word-transliteration alignement", "content": "
Test Data | # of words (# of characters) | # of matches (# of characters) | Word precision (Characters)
LODCE | 200 (496) | 179 (470) | 89.5% (94.8%)
Sinorama | 200 (512) | 151 (457) | 75.5% (89.3%)
Sci. Am. | 200 (602) | 180 (580) | 90.0% (96.3%)
", "num": null } } } }