{
"paper_id": "I11-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:32:51.749355Z"
},
"title": "A Semantic-Specific Model for Chinese Named Entity Translation",
"authors": [
{
"first": "Yufeng",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "chenyf@nlpr.ia.ac.cn"
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "cqzong@nlpr.ia.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We observe that (1) it is difficult to combine transliteration and meaning translation when transforming named entities (NE); and (2) there are different translation variations in NE translation, due to different semantic information. From this basis, we propose a novel semantic-specific NE translation model, which automatically incorporates the global context from corpus in order to capture substantial semantic information. The presented approach is inspired by example-based translation and realized by log-linear models, integrating monolingual context similarity model, bilingual context similarity model, and mixed language model. The experiments show that the semantic-specific model has substantially and consistently outperformed the baselines and related NE translation systems.",
"pdf_parse": {
"paper_id": "I11-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "We observe that (1) it is difficult to combine transliteration and meaning translation when transforming named entities (NE); and (2) there are different translation variations in NE translation, due to different semantic information. From this basis, we propose a novel semantic-specific NE translation model, which automatically incorporates the global context from corpus in order to capture substantial semantic information. The presented approach is inspired by example-based translation and realized by log-linear models, integrating monolingual context similarity model, bilingual context similarity model, and mixed language model. The experiments show that the semantic-specific model has substantially and consistently outperformed the baselines and related NE translation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entity (NE) translation, which transforms a name entity from source language to target language, plays a very important role in translingual language processing tasks, such as machine translation and cross-lingual information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generally, NE translation 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) The combination of transliteration and meaning translation. Either transliteration or meaning translation is only a subtask of NE translation. There has been less work devoted to includes transliteration and meaning translation. Recently, many researches have been devoted to NE transliteration (most person names) or NE meaning translation (organization names) individually. However, there are still two main challenges in statistical Chinese-English (C2E) NE translation. the combination of transliteration and meaning translation for translating NEs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) The selection of NE translation variations. Segments in different NEs could be translated differently due to NEs' origins and enrich language phenomenon (Huang et al., 2005) . As shown in Table 1 , the same Chinese character \"\u91d1\" is translated into different English variations (highlighted in aligned parts). Table 1\uff0eC2E Translation variations of a character \"\u91d1\" in different instances Furthermore, we randomly extract 100 Chinese characters from the person names of LDC2005T34 corpus, and find out all the characters have more than one translation variations. And each character has about average 7.8 translation variations. Also, (Li et al., 2004) have indicated that there is much confusion in C2E transliteration and Chinese NEs have much lower perplexity than English NEs.",
"cite_spans": [
{
"start": 157,
"end": 177,
"text": "(Huang et al., 2005)",
"ref_id": "BIBREF6"
},
{
"start": 636,
"end": 653,
"text": "(Li et al., 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 192,
"end": 199,
"text": "Table 1",
"ref_id": null
},
{
"start": 313,
"end": 324,
"text": "Table 1\uff0eC2E",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to the above two problems, we find that a crucial problem of C2E NE translation is selecting a correct syllable/word at each step, unlike traditional Statistical machine translation (SMT), which mainly focuses on (word, phrase or syntax) alignment and reordering. The selection in NE translation is much related to its semantic information, including NE types, origins, collocations of included Chinese characters, and position-sensitive etc. We want the translation model could automatically learn the semantic information. However, this semantic information for translation is various and difficult to classify.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given an input \"\u5361\u79d1\u592b\u91d1(Kakovkin)\", how to identify the translation of \"\u91d1\"? Only selecting high probable translation across the training set is not reliable in this case. After simply comparing \"\u5361\u79d1\u592b\u91d1\" with the instances in Table 1 , we find that the input is much relevant to \"\u5361\u5217\u4f0a\u91d1 (kin)\", since both of them include \"\u91d1\" at the end position, and their contexts are much related (they share a common Chinese character usage mainly due to the same origin (Russia), such as \"\u5361\", \"\u5217\", and \"\u592b\" etc., according to clues supplied by global context). If we only considers the left/right context of \"\u91d1\", \"\u5361\u79d1\u592b\u91d1\" would have been related to \"\u963f\u5229\u4e9a\u592b\u91d1(din)\" wrongly. From this view, this strongly suggests using a global context as the knowledge base for the final translation decision.",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 227,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, we propose a semantic-specific NE translation model, which makes use of those related instances in the training data (defined as global context), to capture semantic information. The main idea is: for each input Chinese NE segment, it is assumed that its correct translation exists somewhere in the instances of the training set. What we need to do is to find out the correct answers based on semantic clues. It is achieved by selecting relevant instances, of which the semantic information is much relevant with the input. In other word, we choose those relevant instances from corpus to imitate translation. Here, semantic information is not directly learned, but is used as a bridge to measure the relevance or similarity between the input and those instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed semantic-specific model has two advantages. Firstly, traditional translation approaches only exploit a general model to transform a source name into the target name with the same rules or distributions. Whereas our model could capture the transformation differences by measuring semantic similarity among different instances (global context). Secondly, we do not need define exact semantic labels for translation, such as various origins or NE types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Formally, given a source (Chinese) name 1 ,..., ,...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "2"
},
{
"text": ", which consists of K Chinese segments, we want to find its target (English) translation 1 ,... ,...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k K C c c c =",
"sec_num": null
},
{
"text": "E e e e = of the highest probability. Here, it is assumed that an NE is literally translated, without insertion or deletion during the transformation. Within a probabilistic framework, a translation system produces the optimum target name, E*, which yields the highest posterior probability given the source Chinese name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
{
"text": "* arg max ( | ) E E E P E C \u2208\u03a6 = (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
{
"text": "where E \u03a6 is the set of all possible translations for the Chinese name. In order to incorporate enrich language phenomenon of NEs (i.e. origins or other semantic information that affect NE translation) for capturing more exact translation, ( | ) P E C is rewritten as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
{
"text": "( | ) ( , | ) max ( , | ) S S P E C P E S C P E S C = \u2245 \u2211 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
{
"text": "where S is the semantic-specific information for C and E . Inspired by example-based machine translation model (Nagao, 1984; Sato and Nagao, 1990) , we assume that certain mappings in the training set are identical with the transformation of the input NE. Thus we materialize the semantic information as a set of C2E mappings coming from the training set Therefore, the semantic-specific translation model incorporates semantic information by finding out the most likely mappings coming from the training set to capture the semantic structure. If the mappings are known, the translation is achieved. Thus the semantic-specific model is further derived as: sc . Finally, ( ) P E is the probability to connect the target segments as the final translation E .Therefore, in our semantic-specific model, the traditional NE translation problem is transferred as searching the most probable (higher semantic similarity) mappings from the training data and then constructing the final translation.",
"cite_spans": [
{
"start": 111,
"end": 124,
"text": "(Nagao, 1984;",
"ref_id": "BIBREF15"
},
{
"start": 125,
"end": 146,
"text": "Sato and Nagao, 1990)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 1 1 1 1 1 1 1 1 1 1 1 1 ( , | ) ( ,[ , ] | ) ( ,",
"eq_num": ", | )"
}
],
"section": "k K",
"sec_num": null
},
{
"text": "In the proposed model (Eq (3)), those features are equally weighted. However, they should be weighted differently according to their contributions. Considering the advantages of the maximum entropy model (Berger et al., 1996) to integrate different kinds of features, we use this framework to model the probability ( , | ) P E S C . Suppose that we have a set of M feature functions ( , , ) , m 1,... .The decision rule is used to choose the most probable target NE (Och and Ney, 2002) :",
"cite_spans": [
{
"start": 204,
"end": 225,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF1"
},
{
"start": 383,
"end": 390,
"text": "( , , )",
"ref_id": null
},
{
"start": 466,
"end": 485,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "{ }^1 , ( , ) arg max ( , , ) M m m m E S E S h C E S \u03bb = = \u2211",
"eq_num": "(4)"
}
],
"section": "k K",
"sec_num": null
},
{
"text": "Here, the feature functions 1 ( , , )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
{
"text": "M h C E S are modeled by the probabilities of 1 ( | ) K P sc C , 1 1 ( | , )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
{
"text": "K K P se sc C , and ( ) P E respectively. Next, we discuss these three features in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k K",
"sec_num": null
},
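To make the decision rule concrete, here is a minimal Python sketch (not the authors' code) of the log-linear scoring in Eq. (4): each hypothesis is scored by the lambda-weighted sum of its log feature probabilities, and the highest-scoring one is chosen. The candidate names and feature values below are hypothetical; the lambda values are the ones reported in Section 5.

```python
import math

# Lambda weights for the three features (monolingual similarity, bilingual
# similarity, mixed language model), as reported in Section 5 of the paper.
LAMBDAS = (0.248, 0.565, 0.187)

def loglinear_score(feature_probs, lambdas=LAMBDAS):
    """Score one (E, S) hypothesis as the lambda-weighted sum of its log
    feature probabilities, following the decision rule of Eq. (4)."""
    return sum(lam * math.log(max(p, 1e-12))  # floor guards against log(0)
               for lam, p in zip(lambdas, feature_probs))

def decode(hypotheses):
    """Return the hypothesis with the highest log-linear score; `hypotheses`
    maps a candidate translation to its (P(sc|C), P(se|sc,C), P(E)) values."""
    return max(hypotheses, key=lambda h: loglinear_score(hypotheses[h]))

# Hypothetical toy candidates for one input NE:
print(decode({"Kakovkin": (0.4, 0.7, 0.01), "Kakovdin": (0.4, 0.2, 0.02)}))
```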
{
"text": "The First feature 1 ( | ) K P sc C segments the source into several related segments assumed independence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "1 1 1 1 ( , , ) ( | ) ( | ) K K K k k k h C E S P sc c P sc c = = \u2248 \u220f (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "The probability ( | ) k k P sc c describes the relationship of k sc and the source NE segment k c . Since k sc and k c are on the same language side, ( | ) k k P sc c can be commonly measured by the frequency of k sc . However, this measurement usually produces short and high frequent segments, which is not really suitable for NE translation with multiple variations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "To better estimate the distribution ( | ) k k P sc c , this paper proposes a much more generic model called monolingual similarity model, which captures phonetic characteristics and corpus statistics, and also removes the bias of choosing shorter segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | ) ( , ) ( ) ( ) log(| | 1) k k l k k k k k P sc c sim sc c tf sc idf sc sc \u2245 \u00d7 \u00d7 \u00d7 +",
"eq_num": "(6)"
}
],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "Here we first adopt a local similarity function ( , ) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf8f4 = \uf8f2 \u00d7 \uf8f4 \uf8f3 \u2211",
"eq_num": "(7)"
}
],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "If all the characters of the two segments are identical ( k k sc c = ), their similarity is assigned as a high score 1.0. However, many phonetically similar segments are usually translated into a same syllable, such as \"\u80af\" and \"\u574e\" could align to a same syllable \"cam\". So we use NE alignment result to evaluate the phonetic similarity of two segments by 1 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "( | ) ( | ) I i k i k i P e sc P e c I = \u00d7 \u2211 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
{
"text": "where i e denotes the same syllables they aligned in the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
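A minimal sketch of how Eqs. (6) and (7) might be computed, assuming `align_probs` maps each Chinese segment to a dict of {English syllable: P(e|segment)} estimated from the training alignments, and `tf`/`idf` are precomputed corpus statistics; all names are ours, not the paper's.

```python
import math

def sim_l(sc, c, align_probs):
    """Local literal similarity of Eq. (7): 1.0 for identical segments,
    otherwise the averaged product of alignment probabilities over the
    English syllables e_i that both segments align to in training."""
    if sc == c:
        return 1.0
    shared = set(align_probs.get(sc, {})) & set(align_probs.get(c, {}))
    if not shared:
        return 0.0
    return sum(align_probs[sc][e] * align_probs[c][e] for e in shared) / len(shared)

def p_sc_given_c(sc, c, align_probs, tf, idf):
    """Monolingual similarity model of Eq. (6): literal similarity scaled by
    corpus statistics, with log(|sc|+1) favoring longer segments."""
    return (sim_l(sc, c, align_probs)
            * tf.get(sc, 0.0) * idf.get(sc, 0.0) * math.log(len(sc) + 1))
```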
{
"text": "On the other hand, a global concept, which is borrowed from tf\u00d7idf scheme in information retrieval (Chen et al., 2003) , is used in Eq (7). Therefore, Eq (7) prefers Chinese segments that occur frequently, but rarely have different English transformations. Besides, since a longer segment has less disambiguation of its translation variations, we also favor longer Chinese segments, so that the length of a Chinese segment, i.e., | | k sc , is also considered.",
"cite_spans": [
{
"start": 99,
"end": 118,
"text": "(Chen et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Similarity Model",
"sec_num": "3.1"
},
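The paper borrows the tf×idf concept without giving exact formulas, so the following sketch is only one plausible reading: tf is the relative frequency of a Chinese segment in the aligned training pairs, and idf penalizes segments that have many distinct English transformations.

```python
import math
from collections import Counter, defaultdict

def segment_stats(aligned_pairs):
    """Corpus statistics for Eq. (6) from (chinese_segment, english_segment)
    alignment pairs; this idf definition is our assumption, not the paper's."""
    freq, variants = Counter(), defaultdict(set)
    for zh, en in aligned_pairs:
        freq[zh] += 1
        variants[zh].add(en)
    total = sum(freq.values())
    tf = {s: n / total for s, n in freq.items()}
    # Segments with fewer distinct English transformations get a higher idf.
    idf = {s: math.log(len(aligned_pairs) / len(variants[s])) for s in variants}
    return tf, idf
```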
{
"text": "The second feature is formulated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "2 1 1 1 1 ( , , ) ( | , ) ( | , ) K K K K k k k k h C E S P se sc c P se sc c = = \u2248 \u220f",
"eq_num": "(8)"
}
],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "The probability ( | , ) k k k P se sc c identifies the target segment k se , of which the semantic information is consistent with the input k c . This distribution estimates the bilingual similarity of k se and k c , thus is formulated as follows: sim sc c , which only measures the literal similarity based on characters or syllables as shown in Eq (7). Because it is difficult to measure the semantic similarity of two segments directly, we quantify their similarity in terms of their specific contexts. The context of k c is the input NE C , while the context of k sc is an instance SC that includes k sc in the training set. For example: given an input NE \"\u65e5\u672c\u677e\u5c71\u82ad\u857e\u821e\u56e2\" that acts as a context, we want to find the translation of a segment \"\u677e\", the segment \"\u677e\" in the training data have different global contexts, such as \"\u65af\u6587\u677e (Svensson)\", \"\u4e9a\u677e\u68ee (Asuncion)\", and \"\u8d64\u677e \u5e7f\u9686 (Akamatsu Hirotaka)\" and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | , )",
"eq_num": "( , )"
}
],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "To address this problem, we adopt a vector space model that describes the context of k c and k sc . Some notions are defined here. A term set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "1 1 { ,..., , ,..., } n n T t t t t \u2212 \u2212 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "is an orderly character set of the context of k c , where [ , ] n n \u2212 is a Character-based n-range context window for k c . This term set not only represents the character set of the context, but also presents the position information of the context. The similar action is applied to SC (the context of k sc ). Therefore, the context of k c (the input Chinese NE) and each instance that includes k sc would be transformed into vectors. For example, given a segment \"\u677e\" in the input NE \"\u65e5\u672c\u677e\u5c71\u82ad\u857e\u821e\u56e2\", its term vector is {/s, \u65e5\uff0c\u672c\uff0c\u5c71\uff0c\u82ad\uff0c\u857e} when 3 n = , \"/s\" denotes the start position. While \"\u677e\" in the instance \"\u8d64\u677e\u5e7f\u9686\", its vector is {/, /s, \u8d64, \u5e7f, \u9686, /e}, where \"/\" denotes a valid character and \"/e\" represents the end position.",
"cite_spans": [
{
"start": 58,
"end": 63,
"text": "[ , ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
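The position-aware term set can be built as in the sketch below, which reproduces both of the paper's examples; the treatment of positions beyond the window edge ("/" padding) is our reading of the second example. `pos` and `seg_len` (our names) locate the segment inside the name.

```python
def context_terms(name, pos, seg_len, n=3):
    """Build the [-n, n] position-aware term set around a segment:
    '/s' marks the name start, '/e' the name end, '/' pads positions
    falling outside the name beyond those boundary markers."""
    terms = []
    for offset in range(-n, 0):                    # left context
        i = pos + offset
        terms.append(name[i] if i >= 0 else ("/s" if i == -1 else "/"))
    for offset in range(1, n + 1):                 # right context
        i = pos + seg_len - 1 + offset
        terms.append(name[i] if i < len(name) else ("/e" if i == len(name) else "/"))
    return terms

# Reproduces the paper's examples for the segment "松":
print(context_terms("日本松山芭蕾舞团", pos=2, seg_len=1))  # ['/s', '日', '本', '山', '芭', '蕾']
print(context_terms("赤松广隆", pos=1, seg_len=1))          # ['/', '/s', '赤', '广', '隆', '/e']
```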
{
"text": "We don't use Boolean weighting or tf/idf conceptions as traditional information retrieval (IR) to calculate the terms' weight, due to the sparse data problem. The mutual information is adopted to calculate the weight of t , which expresses the relevance between the context of k c and the con-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "text of k sc . ( , ) ( , ) log ( ) ( ) C SC weigh C SC C SC p t t t MI t t p t p t = = \u00d7",
"eq_num": "(10)"
}
],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "After transferring the contexts into general vectors, the similarity of two vectors is measured by computing the cosine value of the angle between them. This measure, called cosinesimilarity measure, has been widely used in information retrieval tasks (Baeza-Yates and Ribeiro-Neto, 1999), and is thus utilized here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Similarity Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C SC s k k C SC V V sim sc c V V = \u00d7 \uf067",
"eq_num": "(11)"
}
],
"section": "( , )",
"sec_num": null
},
{
"text": "The numerator is the inner product of two vectors. The denominator is product of the length of C V and the length of SC V . If an instance SC (including the segment k sc ) is much related to the input NE C (including the segment k c ), this case suggests that the semantic similarity between k c and k sc is much high. In other words, the two probably have the same translation k se . Here k sc acts as a bridge to realize the transformation from k c to k se .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "( , )",
"sec_num": null
},
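A small sketch of Eqs. (10) and (11): term weights are mutual-information scores, and the two weighted context vectors are compared with cosine similarity. The probabilities passed to `mi_weight` are assumed to be relative frequencies estimated from the training corpus.

```python
import math

def mi_weight(p_joint, p_tc, p_tsc):
    """Mutual-information weight of a term t (Eq. (10))."""
    return math.log(p_joint / (p_tc * p_tsc))

def sim_s(vec_c, vec_sc):
    """Cosine similarity of Eq. (11) between two sparse term-weight
    vectors, represented as dicts mapping term -> MI weight."""
    shared = set(vec_c) & set(vec_sc)
    num = sum(vec_c[t] * vec_sc[t] for t in shared)
    den = (math.sqrt(sum(w * w for w in vec_c.values()))
           * math.sqrt(sum(w * w for w in vec_sc.values())))
    return num / den if den else 0.0
```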
{
"text": "The probability ( ) P E in Eq (3) encodes the popularity distribution of an English NE E , i.e. English language model. As mentioned above, there are two transformation styles for NEs: transliteration and meaning translation. Hence the glue rules for the final result are different. Transliteration is syllable-connecting without space on the English side, such as \"Matsu (\u677e)\" and \"yama (\u5c71)\" are connected as \"Matsuyama (\u677e\u5c71)\", its language model can be defined as a syllable-based n-gram model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixed Language Model",
"sec_num": "3.3"
},
{
"text": ", 1 , , 1 1 1 ( ) ( | ) K J k j LM tl k j k j n k j P E P e e \u2212 \u2212 + = = = \u220f \u220f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixed Language Model",
"sec_num": "3.3"
},
{
"text": "(suppose there are j letters in the k segment). In contrast, the output of meaning translation is chained word by word with spaces, for example, \"Wuyi (\u6b66\u5937)\" and \"Mountain (\u5c71)\" are connected as \"Wuyi Mountain\", of which the language model is presented as a general word-based n-gram ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixed Language Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "model 1 1 1 ( )",
"eq_num": "( | )"
}
],
"section": "Mixed Language Model",
"sec_num": "3.3"
},
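The mixed language model can be pictured as two plain n-gram models, one over syllables for transliterated parts and one over words for meaning-translated parts. The sketch below uses unsmoothed maximum-likelihood estimates with a probability floor, a simplification of whatever smoothing the authors actually used; the training sequences are hypothetical.

```python
import math
from collections import defaultdict

class NGramLM:
    """A plain n-gram language model over syllables or words (MLE + floor)."""
    def __init__(self, sequences, n=3, floor=1e-6):
        self.n, self.floor = n, floor
        self.counts, self.context = defaultdict(int), defaultdict(int)
        for seq in sequences:
            padded = ["<s>"] * (n - 1) + list(seq) + ["</s>"]
            for i in range(n - 1, len(padded)):
                ctx = tuple(padded[i - n + 1:i])
                self.counts[ctx + (padded[i],)] += 1
                self.context[ctx] += 1

    def logprob(self, seq):
        padded = ["<s>"] * (self.n - 1) + list(seq) + ["</s>"]
        lp = 0.0
        for i in range(self.n - 1, len(padded)):
            ctx = tuple(padded[i - self.n + 1:i])
            p = self.counts.get(ctx + (padded[i],), 0) / self.context.get(ctx, 1)
            lp += math.log(max(p, self.floor))
        return lp

# A syllable-based model glues transliterated parts ("Matsu" + "yama"); a
# word-based model glues meaning-translated parts ("Wuyi" + "Mountain").
syllable_lm = NGramLM([["Ma", "tsu", "ya", "ma"]], n=3)
word_lm = NGramLM([["Wuyi", "Mountain"]], n=3)
```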
{
"text": "Without Chinese word segmentation, we have to calculate every possible mapping to determine the most probable one in a large corpus, which will make the search space significantly huge. Therefore, we only measure those instances that including at least one character of the input NE. And the candidates, of which the feature values are below a threshold, are discarded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Search",
"sec_num": "4"
},
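A sketch of the pruning just described, under the assumption that training instances are (Chinese name, English name) pairs and `feature_value` is a scoring callback; both names are ours.

```python
def prune_candidates(input_ne, instances, feature_value, threshold):
    """Keep only instances sharing at least one character with the input NE,
    then discard candidates whose feature value falls below the threshold."""
    chars = set(input_ne)
    shared = [(zh, en) for zh, en in instances if chars & set(zh)]
    return [(zh, en) for zh, en in shared if feature_value(zh, en) >= threshold]
```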
{
"text": "The weighting coefficients for the three features in Eq (3) can be learned from the development set via Maximum Entropy (ME) training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ME Parameter Training",
"sec_num": "4.1"
},
{
"text": "One way to get the associated weighting coefficients for those log-probability-factors adopted in the model is to regard each of them as realvalued features, and then use ME framework to find their corresponding lambda values, which are just the weighting coefficients that we look for. Following (Och et al. 2002; Liu et al. 2005) , we use the GIS (Generalized Iterative Scaling) algorithm (Darroch and Ratcliff, 1972) to train the model parameters 1 ,... M \u03bb \u03bb of the log-linear models according to Eq (4). In practice, YAS-",
"cite_spans": [
{
"start": 287,
"end": 314,
"text": "Following (Och et al. 2002;",
"ref_id": null
},
{
"start": 315,
"end": 331,
"text": "Liu et al. 2005)",
"ref_id": "BIBREF14"
},
{
"start": 391,
"end": 419,
"text": "(Darroch and Ratcliff, 1972)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ME Parameter Training",
"sec_num": "4.1"
},
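As a rough illustration of what the weight training does (the paper uses GIS via the YASMET package; the plain gradient ascent below is only a stand-in, not the authors' setup), the lambdas are tuned on a development set to maximize the conditional log-likelihood of the correct candidates under the log-linear model.

```python
import math

def tune_lambdas(dev_examples, num_feats=3, lr=0.1, iters=100):
    """Tune log-linear weights by gradient ascent on the conditional
    log-likelihood. Each dev example is (candidates, gold_index), where
    `candidates` is a list of M-tuples of log feature values."""
    lambdas = [1.0] * num_feats
    for _ in range(iters):
        grad = [0.0] * num_feats
        for cands, gold in dev_examples:
            scores = [sum(l * f for l, f in zip(lambdas, feats)) for feats in cands]
            z = max(scores)
            probs = [math.exp(s - z) for s in scores]
            total = sum(probs)
            probs = [p / total for p in probs]
            for m in range(num_feats):
                # Gradient: gold feature value minus its model expectation.
                expected = sum(p * cands[i][m] for i, p in enumerate(probs))
                grad[m] += cands[gold][m] - expected
        lambdas = [l + lr * g / len(dev_examples) for l, g in zip(lambdas, grad)]
    return lambdas
```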
{
"text": "We use a greedy search algorithm to search the translation with highest probability in the space of all possible mappings. A state in this space is a partial mapping. A transition is defined as the addition of a single mapping to the current state. Our start state is the empty translation result, where there is no selected mapping. A terminal state is a state in which no more mappings can be added to increase the probability of the current alignment. Our task is to find the terminal state with the highest probability. We can compute gain, a heuristic function, to figure out a probability when adding a new mapping, which is defined as follows: where k S s \uf055 means a single mapping k s is added to S . Since we have assumed that NE is literally translated in our model, there is a restriction: no overlap is allowed between the mapping k s and the mapping set S .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search",
"sec_num": "4.2"
},
{
"text": "The greedy search algorithm for general loglinear models is formally described as follows: The above search algorithm generates the final translation result by adding one mapping for each time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search",
"sec_num": "4.2"
},
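A compact sketch of the greedy search, with a mapping represented as (character-index span, target segment); the `score` callback evaluates a whole mapping set under the log-linear model, and the gain of a transition is taken as the resulting score difference (the paper's exact gain formula is not preserved in this parse).

```python
def greedy_search(candidate_mappings, score):
    """Start from the empty mapping set; repeatedly add the single
    non-overlapping mapping with the highest positive gain; stop at a
    terminal state where no addition increases the score."""
    selected, best = [], score([])
    while True:
        best_gain, best_m = 0.0, None
        for m in candidate_mappings:
            span = m[0]                              # frozenset of char indices
            if any(span & s[0] for s in selected):   # no overlap allowed
                continue
            gain = score(selected + [m]) - best
            if gain > best_gain:
                best_gain, best_m = gain, m
        if best_m is None:
            return selected
        selected.append(best_m)
        best += best_gain
```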
{
"text": "The training-set, testing-set, and development-set all come from Chinese-English Named Entity List v1.0 (LDC2005T34). The training-set consists of 218,172 proofread bilingual entries: 73,052 person name pairs, 76,460 location name pairs and 68,660 organization name pairs. Besides, 300 person names, 300 organization names, and 300 names of various NE types (including person names, location names and organization names) are used as three testing-sets respectively. Development-set includes 500 randomly selected name pairs of various NE types. There is no overlap between the training set, the development set and the open test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Note that in the training set, the included transliterated parts and the meaning translated parts, which have been manually labeled, are trained separately. 218,172 NE pairs are split into 185,339 transliterated pairs (TL-training set) and 62,453 meaning translated pairs (TS-training set) (since transliteration and meaning translation would occur in one NE pair, so 185,339+62.453>218,172).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In the TL-training set, the Chinese name of an NE pair is transformed into a character-based sequence and its aligned English name is split into syllables, of which the split rules are described in (Jiang et al., 2007) . Afterwards, GI-ZA++ 4",
"cite_spans": [
{
"start": 198,
"end": 218,
"text": "(Jiang et al., 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "First, we will show the experimental results when setting different parameters for the semantic similarity model, which is done on the development set with equal feature weightings. We set tool is invoked to align characters to syllables. On the other hand, for TS-training set, the Chinese part of an NE is also treated as a character-based sequence, while the English part is regarded as a word-based sequence. The alignment between Chinese characters and English words are achieved by GIZA++ toolkit as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use the recall of top-N hypotheses (Yang et al, 2008) as the evaluation metrics, and also adopt the Mean Reciprocal Rank (MRR) metric (Kantor and Voorhees, 2000) , a measure that is commonly used in information retrieval, assuming there is precisely one correct answer. Each NE translation generates at most top-50 hypotheses for each input when computing MRR. different ranges of the context window (the parameter n ) to find which range could get the best performance. Figure 1 illustrates the effect of the range parameter n for the final translation result (by MRR metric). From Figure 1 , we could find that when n=3, the proposed model gets the best performance (MRR value=0.498) . Therefore, n=3 is chosen for further study. Because the proposed three features cannot be used separately, we do not compare their individual effectiveness. Those normalized weighting coefficients (i.e., normalized lambda-values) obtained from YASMET package is 0.248, 0.565 and 0.187 (we all use 3-gram in the mixed language model). It is not surprising to find that 2 \u03bb (corresponding to the bilingual similarity feature) receives the highest value. This clearly indicates that the bilingual similarity model plays a critical role in our semantic-specific translation model.",
"cite_spans": [
{
"start": 38,
"end": 56,
"text": "(Yang et al, 2008)",
"ref_id": "BIBREF21"
},
{
"start": 137,
"end": 164,
"text": "(Kantor and Voorhees, 2000)",
"ref_id": "BIBREF9"
},
{
"start": 671,
"end": 688,
"text": "(MRR value=0.498)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 474,
"end": 482,
"text": "Figure 1",
"ref_id": "FIGREF4"
},
{
"start": 586,
"end": 594,
"text": "Figure 1",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
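For reference, the two evaluation metrics can be computed as in this small sketch (our helper names), where each system output is a ranked list of hypotheses and there is exactly one reference per input.

```python
def topn_recall(ranked_lists, references, n):
    """Fraction of inputs whose reference appears among the top-n hypotheses."""
    hits = sum(ref in ranked[:n] for ranked, ref in zip(ranked_lists, references))
    return hits / len(references)

def mrr(ranked_lists, references, cutoff=50):
    """Mean Reciprocal Rank over at most `cutoff` hypotheses per input;
    the reciprocal rank is 0 when the correct answer is not found."""
    total = 0.0
    for ranked, ref in zip(ranked_lists, references):
        top = ranked[:cutoff]
        total += 1.0 / (top.index(ref) + 1) if ref in top else 0.0
    return total / len(references)
```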
{
"text": "We adopt a traditional statistical translation model (a phrase-based machine translation model, Moses 5 Setting decoder) to process transliteration, meaning translation, and their combination as three baselines respectively. All of the baselines generate Top-50 candidates for each input. Table 2 . The experiment configurations of baselines Note that baseline III combines transliteration and meaning translation only by training TL training set and TS training set individually, and then directly integrating generated syllable-based alignment and word-based alignment into a whole translation table.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 296,
"text": "Table 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Semantic-Specific Model Vs. Baselines",
"sec_num": "5.1"
},
{
"text": "Firstly, Table 3 compares the semanticspecific model (SS-model) with three baselines for the translation of person names. From Table 3 , we find that the proposed model raises the recall of top-50 6.2% over Baseline I. It proves that our proposed model is effective for the transliteration of person names, and outperforms the traditional transliteration model. Baseline II can not output result due to its used TS-training set is out of the range of transliterating. It is interesting that the performance of baseline III even deteriorates after combing TS and TL training sets. One explanation might be that the language model of baseline III is only trained on word level, so that there is a severe data sparse problem. Table 4 . Semantic-specific model vs. baselines for organization names' translation Secondly, the comparison between SS-model and three baselines for translating organization names are shown in Table 4 . Baseline III outperforms baseline II for combining both TL-training set and TS-training set. Also SS-model has substantially raised the Top-N recall and MRR value over the baselines. Intuitively, we might expect that SS model could play a greater advantage on translating organization names, because organization names usually combine transliteration and meaning translation. However, comparing Table 3 with Table 4 , the performance gaps between SS-model and baselines for organization names is smaller than that for person names. After checking those errors, this phenomenon is probably due to the word reordering problem, which usually occurs in the translation of organization names, but has not been considered by SS-model. Further study would be required for this problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 3",
"ref_id": "TABREF8"
},
{
"start": 127,
"end": 135,
"text": "Table 3",
"ref_id": "TABREF8"
},
{
"start": 724,
"end": 731,
"text": "Table 4",
"ref_id": null
},
{
"start": 918,
"end": 925,
"text": "Table 4",
"ref_id": null
},
{
"start": 1323,
"end": 1344,
"text": "Table 3 with Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Semantic-Specific Model Vs. Baselines",
"sec_num": "5.1"
},
{
"text": "Thirdly, we measure the overall effect of SSmodel in Table 5 . Evidently, the proposed SSmodel yields significantly better results than the three baselines at all aspects. It is not surprising to find that the proposed SS-model is effective in translating various NEs of different NE types. Table 5 . Semantic-specific model vs. baselines for various names' translation",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 5",
"ref_id": null
},
{
"start": 291,
"end": 298,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic-Specific Model Vs. Baselines",
"sec_num": "5.1"
},
{
"text": "Actually, the proposed semantic-specific model captures semantic information by incorporating the global context information in the corpus, which is similar to the joint transliteration model proposed by (Li et al., 2004) . However, the joint model only utilized the local context of the input (joint n-gram model of transliteration pairs) Table 6 gives the comparison of the joint model and SS-model for person names' transliteration. Here previous used training-set I and 300 person names are adopted for training and testing here. Also we use 3-gram in both of the two models. As shown in Table 6 , even though the performance gap of Top1 (+0.8%) is not much obvious, the performance gap gets larger when the top-N hypotheses increase. This evidently proves the superiority of the proposed model on selecting the correct translation variation from global context. Table 6 . Semantic-specific model vs. joint model for person names' translation",
"cite_spans": [
{
"start": 204,
"end": 221,
"text": "(Li et al., 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 6",
"ref_id": null
},
{
"start": 592,
"end": 599,
"text": "Table 6",
"ref_id": null
},
{
"start": 867,
"end": 874,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic-Specific Model Vs. Joint Transliteration Model",
"sec_num": "5.2"
},
{
"text": "To further validate the capability of our proposed model, we measure its sensitivity to NE origin information. Thus we compare it with a wellknown semantic transliteration model (Li et al., 2007) , which only deals with transliteration. Li , and then uses its corresponding trained model, which is trained on instances all from origin O . The training and decoding process also use the Moses decoder.",
"cite_spans": [
{
"start": 178,
"end": 195,
"text": "(Li et al., 2007)",
"ref_id": "BIBREF13"
},
{
"start": 237,
"end": 239,
"text": "Li",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic-Specific Model Vs. Origin-Based Model",
"sec_num": "5.3"
},
{
"text": "In this experiment, we adopt training-set II, which includes 7,021 person names from USA, Japan and Korea (International whoswho corpus in LDC2005T34). And then we randomly select 100 person names from USA, Japan and Korea respectively (also in whoswho corpus) as our test data. Also, there is no overlap between the training set II and those test data. Here, baseline I is also the transliteration model, but trained on training set II, and we use the MRR criterion as well. Table 7 . Semantic-specific model vs. originbased model for person names' translation Considering Table 7 , though there is a slight drop comparing our model with origin-based model for the Japanese person names, the translation improvements on the person names of the other two origins show the superiority of our semantic-specific translation model. Actually, there would be much more origins to classify. For instance, there are more than 100 origins in whoswho data; it is tedious to train a large number of models in practice. And the origin labeled data for person names is hard to acquire. By using semantic-specific model, we could directly cluster instances of similar origin, and generate final translation result for origin consistency. The experiments prove that the SS-model is effective on capturing NE origin information to assist NE translation, and it could further accommodate more different semantic information.",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 483,
"text": "Table 7",
"ref_id": null
},
{
"start": 574,
"end": 581,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic-Specific Model Vs. Origin-Based Model",
"sec_num": "5.3"
},
{
"text": "There are two strategies for NE translation. One is to extract NE translation pairs from the Web or from parallel/comparable corpora. This is essentially the same as constructing NE-pair dictionary (lee et al., 2006; Jiang et al., 2009) , which is usually not a real-time translation model and is limited by the coverage of the used corpus and the Web resource.",
"cite_spans": [
{
"start": 198,
"end": 216,
"text": "(lee et al., 2006;",
"ref_id": "BIBREF11"
},
{
"start": 217,
"end": 236,
"text": "Jiang et al., 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "The other is to directly translate an NE phonetically or according to its meaning. For transliteration, several transliteration approaches have been applied to various language pairs (Knight and Graehl, 1998; Tsuji 2002; Li et al. 2004; Oh and Choi, 2005; Pervouchine et al., 2009; Durrani et al., 2010) . In contrast, for NE meaning translation, (Zhang et al., 2005; Chen and Zong, 2008; have proposed different statistical translation models only for organization names.",
"cite_spans": [
{
"start": 183,
"end": 208,
"text": "(Knight and Graehl, 1998;",
"ref_id": null
},
{
"start": 209,
"end": 220,
"text": "Tsuji 2002;",
"ref_id": "BIBREF20"
},
{
"start": 221,
"end": 236,
"text": "Li et al. 2004;",
"ref_id": "BIBREF12"
},
{
"start": 237,
"end": 255,
"text": "Oh and Choi, 2005;",
"ref_id": "BIBREF17"
},
{
"start": 256,
"end": 281,
"text": "Pervouchine et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 282,
"end": 303,
"text": "Durrani et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 347,
"end": 367,
"text": "(Zhang et al., 2005;",
"ref_id": "BIBREF23"
},
{
"start": 368,
"end": 388,
"text": "Chen and Zong, 2008;",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "So far, semantic transliteration has been proposed for learning language origin and gender information of person names (Li et al., 2007) . However, semantic information is various for NE translation. It is complicated to define different semantic types, and is tedious to train a large number of models used for different semantic information. Moreover, a semantically labeled training corpus is hard to acquire. Hence this paper does not directly learn NE semantic information, but measures the semantic similarity between the input and global context to capture exact NE translation.",
"cite_spans": [
{
"start": 119,
"end": 136,
"text": "(Li et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper, we present a novel semanticspecific model which could adaptively learn semantic information via instance-based similarity measurement from global context. Accordingly, this model combines transliteration and meaning translation, and automatically selects most probable translation candidates on the basis of the NE semantic-specific information. In summary, our experiments show that the semantic-specific model is much more effective than the traditional statistical model for named entity translation, which achieves a remarkable 31.6% relative improvement in MRR (Table 5) . Furthermore, the proposed model yields a comparable result with the joint transliteration model (also using context) and the origin-based model, which shows its advantage on capturing semantic information from global context, such as origin information.",
"cite_spans": [],
"ref_spans": [
{
"start": 581,
"end": 590,
"text": "(Table 5)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "It is expected that the proposed semanticspecific translation model could be further applied to other language pairs, as no language dependent linguistic feature (or knowledge) is adopted in the model/algorithm used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "NE translation referred to in this paper denotes bilingual NE transformation (either transliteration or meaning translation), and meaning translation is proposed as distinct from transliteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The source side of one mapping could be a character, a word or several words. The target side of one mapping could be several syllables or words. Therefore one mapping is defined as a segment pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.fjoch.com/YASMET.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.fjoch.com/GIZA++.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/moses/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research work has been funded by the Natural Science Foundation of China under Grant No. 6097 5053 and 61003160 and also supported by the External Cooperation Program of the Chinese Academy of Sciences. The authors also extend sincere thanks to Prof. Keh-Yih Su for his keen in-sights and suggestions on our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modern Information Retrieval",
"authors": [
{
"first": "R",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ribeiro-Neto",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Baeza-Yates and B. RiBeiro-Neto. 1999. Modern Information Retrieval. ISBN 0-201-39829-X.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Maximum Entropy Approach to Natural Language Processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Stephen A. Della Pietra and Vincent J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Com- putational Linguistics, 22(1):39-72, March.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning Formulation and Transformation Rules for Multilingual Named Entities",
"authors": [
{
"first": "Hsin-His",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Changhua",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL 2003 Workshop on Multilingual and Mixedlanguage Named Entity Recognition",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsin-His Chen, Changhua Yang and Ying Lin. 2003. Learning Formulation and Transformation Rules for Multilingual Named Entities. In Proceedings of ACL 2003 Workshop on Multilingual and Mixed- language Named Entity Recognition, pages 1-8.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Structurebased Model for Chinese Organization Name Translation",
"authors": [
{
"first": "Yufeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM Transactions on Asian Language Information Processing",
"volume": "7",
"issue": "1",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yufeng Chen, Chengqing Zong. 2008. A Structure- based Model for Chinese Organization Name Translation. ACM Transactions on Asian Language Information Processing, 7(1): 1-30, February 2008.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generalized iterative scaling for log-linear models",
"authors": [
{
"first": "J",
"middle": [
"N"
],
"last": "Darroch",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ratcliff",
"suffix": ""
}
],
"year": 1972,
"venue": "Annuals of Mathematical Statistics",
"volume": "43",
"issue": "",
"pages": "1470--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. N. Darroch and D. Ratcliff. 1972. Generalized itera- tive scaling for log-linear models. Annuals of Ma- thematical Statistics, 43: 1470-1480.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hindi-to-Urdu Machine Translation Through Transliteration",
"authors": [
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "465--474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nadir Durrani, Hassan Sajjad, Alexander Fraser, and Helmut Schmid. 2010. Hindi-to-Urdu Machine Translation Through Transliteration. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 465-474.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cluster-Specific Name Transliteration",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the HLT-EMNLP 2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Huang. 2005. Cluster-Specific Name Translitera- tion. In Proceedings of the HLT-EMNLP 2005, Vancouver, BC, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Named Entity Translation with Web Mining and Transliteration",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lee-Feng",
"middle": [],
"last": "Chien",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of IJ-CAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Jiang, Ming Zhou, Lee-Feng Chien, and Cheng Niu. 2007. Named Entity Translation with Web Mining and Transliteration. In Proceedings of IJ- CAI-2007.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mining Bilingual Data from the Web with Adaptively Learnt Patterns",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Shiquan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qingsheng",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ACL-2009 and the 4th IJCNLP of the AFNLP",
"volume": "",
"issue": "",
"pages": "870--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Jiang, Shiquan Yang, Ming Zhou, Xiaohua Liu, and Qingsheng Zhu. 2009. Mining Bilingual Data from the Web with Adaptively Learnt Patterns. In Proc. of ACL-2009 and the 4th IJCNLP of the AFNLP, pages 870-878.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The TREC-5 Confusion Track: Comparing Retrieval Methods for Scanned Text. Informational Retrieval",
"authors": [
{
"first": "B",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Kantor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "2",
"issue": "",
"pages": "165--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul B. Kantor and Ellen M. Voorhees, 2000, The TREC-5 Confusion Track: Comparing Retrieval Methods for Scanned Text. Informational Retrieval, 2, pp. 165-176.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Alignment of Bilingual Named Entities in Parallel Corpora Using Statistical Models and Multiple Knowledge Sources",
"authors": [
{
"first": "Chun-Jen",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Jyh-Shing",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jang",
"suffix": ""
}
],
"year": 2006,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "5",
"issue": "2",
"pages": "121--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chun-Jen Lee, Jason S. Chang and Jyh-Shing R. Jang. 2006. Alignment of Bilingual Named Entities in Parallel Corpora Using Statistical Models and Mul- tiple Knowledge Sources. ACM Transactions on Asian Language Information Processing (TALIP), 5(2): 121-145.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Joint Source Channel Model for Machine Transliteraltion",
"authors": [
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of 42nd ACL",
"volume": "",
"issue": "",
"pages": "159--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haizhou Li, Min Zhang and Jian Su. 2004. A Joint Source Channel Model for Machine Transliteral- tion. In Proceedings of 42nd ACL, pages 159-166.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semantic Transliteration of Personal Names",
"authors": [
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Khe Chai",
"middle": [],
"last": "Sim",
"suffix": ""
},
{
"first": "Jin-Shea",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Minghui",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of 45th ACL",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haizhou Li, Khe Chai Sim, Jin-shea Kuo, and Ming- hui Dong. 2007. Semantic Transliteration of Per- sonal Names, In Proceedings of 45th ACL, pages 120-127.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Log-linear Models for Word Alignment",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual meeting of the ACL",
"volume": "",
"issue": "",
"pages": "459--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu and Shouxun Lin. Log-linear Models for Word Alignment. 2005. In Proceedings of the 43rd Annual meeting of the ACL, pages 459- 466.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Framework of a Mechanical Translation between Japanese and English by Analogy Principle",
"authors": [
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1984,
"venue": "Artificial and Human Intelligence",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Nagao. 1984. A Framework of a Mechanical Translation between Japanese and English by Analogy Principle, In Artificial and Human Intelli- gence, pages 173-180. NATO publications.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Discriminative Training and Maximum Entropy Models for Statistical Machine Translation",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discri- minative Training and Maximum Entropy Models for Statistical Machine Translation. In Proceedings of the 40th Annual Meeting of the ACL, pages 295- 302.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An ensemble of grapheme and phoneme for machine transliteration",
"authors": [
{
"first": "J.-H",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "K.-S",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "450--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.-H. Oh and Choi, K.-S. 2005. An ensemble of gra- pheme and phoneme for machine transliteration. In Proceedings of IJCNLP, pages 450-461.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Transliteration Alignment",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Pervouchine",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-09",
"volume": "",
"issue": "",
"pages": "136--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Pervouchine, Haizhou Li and Bo Lin. 2009. Transliteration Alignment. In Proceedings of ACL- 09, pages 136-144.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Toward Memory-Based Translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of COLING 1990",
"volume": "3",
"issue": "",
"pages": "247--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Sato and M. Nagao. 1990. Toward Memory-Based Translation. In Proceedings of COLING 1990, Vol.3. pages 247-252.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic extraction of translational Japanese-KATAKANA and English word pairs from bilingual corpora",
"authors": [
{
"first": "K",
"middle": [],
"last": "Tsuji",
"suffix": ""
}
],
"year": 2002,
"venue": "Int. J. Comput. Process Oriental Lang",
"volume": "15",
"issue": "3",
"pages": "261--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Tsuji. 2002. Automatic extraction of translational Japanese-KATAKANA and English word pairs from bilingual corpora. Int. J. Comput. Process Oriental Lang. 15(3): 261-279.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Chinese-English Backward Transliteration Assisted with Mining Monolingual Web Pages",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Feifan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceeding of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "541--549",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fan Yang, Jun Zhao, Bo Zou, Kang Liu, Feifan Liu. 2008. Chinese-English Backward Transliteration Assisted with Mining Monolingual Web Pages, In Proceeding of the 46th Annual Meeting of the As- sociation for Computational Linguistics, pages 541-549, Columbus, OH.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Chinese-English Organization Name Translation System Using Heuristic Web Mining and Asymmetric Alignment",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 47th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fan Yang, Jun Zhao, Kang Liu. 2009. A Chinese- English Organization Name Translation System Using Heuristic Web Mining and Asymmetric Alignment. In Proceedings of the 47th Annual Meeting of the ACL, Singapore. August 2 -7.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A Phrase-Based Context-Dependent Joint Probability Model for Named Entity Translation",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Hendra",
"middle": [],
"last": "Setiawan",
"suffix": ""
}
],
"year": 2005,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "600--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Zhang, Haizhou Li, Jian Su, and Hendra Setia- wan. 2005. A Phrase-Based Context-Dependent Joint Probability Model for Named Entity Transla- tion. IJCNLP 2005, pages 600-611.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Transliteration variations\u91d1\u70b3\u534e --Jin Binghua \u91d1\u6210\u52cb --Kim Sung-Hoon \u4f55\u585e \u534e\u91d1 \u5e03\u4f26\u7eb3 --Jose Joaquin Brunner \u82e5\u963f\u91d1\u2022\u5e0c\u6f58\u5fb7 --Joaquim Chipande \u9a6c\u4e01\u8def\u5fb7\u91d1 --Martin Luther King \u91d1\u4e38\u4fe1 --Kanemaru Shin \u7c73\u65af\u91d1 --Miskine \u9ea6\u91d1\u6258\u4ec0 --Aaron Mcintosh \u6587\u68ee\u7279\u2022\u4f2f\u91d1 --Vincent Burgen \u57c3\u5c14\u91d1\u2022\u6770\u62c9\u8f9b --Ergin Celasin \u963f\u5229\u4e9a\u592b\u91d1 --Alyavdin \u5361\u5217\u4f0a\u91d1 --Kaleikin \u2026\u2026 Meaning translation variations \u963f\u65af\u7279\u57fa\u91d1 --Astor Fund \u5317\u4eac\u51b6\u91d1\u5b66\u9662 --Beijing Institute of Metallurgy \u2026\u2026"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "to measure the relationship of the input Chinese segment k c and a possible Chinese segment k sc . It is measured on literal level (shallow level based on Chinese character and phonetic similarity"
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Effects of different context ranges (n) on translation results (by MRR metric)"
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "the similarity of the global context amongst corpus."
},
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"5\">shows their different settings comparing the</td></tr><tr><td colspan=\"4\">proposed semantic-specific (SS) model.</td><td/></tr><tr><td/><td colspan=\"4\">SS-model Baseline I Baseline II Baseline III</td></tr><tr><td>Input</td><td>Un-segmented</td><td>Character-based</td><td>Word-based</td><td>Character-based</td></tr><tr><td>Training data</td><td>TL-training set + TS-training set</td><td>TL-training set</td><td>TS-training set</td><td>TL-training set + TS-training set</td></tr><tr><td/><td>Mix of</td><td/><td/><td/></tr><tr><td>Language</td><td>syllable-</td><td>Syllable-</td><td>Word-</td><td>Word-</td></tr><tr><td>model</td><td>based and</td><td>based</td><td>based</td><td>based</td></tr><tr><td/><td>word-based</td><td/><td/><td/></tr></table>",
"html": null,
"text": ""
},
"TABREF8": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td colspan=\"4\">. Semantic-specific model vs. baselines</td></tr><tr><td/><td colspan=\"3\">for person names' translation</td><td/></tr><tr><td colspan=\"5\">Metric SS-model Baseline I Baseline II Baseline III</td></tr><tr><td>Top1</td><td>34.4%</td><td>0%</td><td>26.5%</td><td>30.8%</td></tr><tr><td colspan=\"2\">Top10 38.7%</td><td>0%</td><td>29.8%</td><td>36.4%</td></tr><tr><td colspan=\"2\">Top50 46.9%</td><td>0%</td><td>35.2%</td><td>40.2%</td></tr><tr><td>MRR</td><td>0.381</td><td/><td>0.297</td><td>0.336</td></tr></table>",
"html": null,
"text": ""
}
}
}
}