{ "paper_id": "O13-2002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:03:31.065473Z" }, "title": "Learning to Find Translations and Transliterations on the Web based on Conditional Random Fields", "authors": [ { "first": "Joseph", "middle": [ "Z" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "jason.jschang@gmail.com" }, { "first": "Jyh-Shing", "middle": [ "Roger" ], "last": "Jang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "country": "Taiwan" } }, "email": "jang@csie.ntu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In recent years, state-of-the-art cross-linguistic systems have been based on parallel corpora. Nevertheless, it is difficult at times to find translations of a certain technical term or named entity even with very large parallel corpora. In this paper, we present a new method for learning to find translations on the Web for a given term. In our approach, we use a small set of terms and translations to obtain mixed-code snippets returned by a search engine. We then automatically annotate the data with translation tags, automatically generate features to augment the tagged data, and automatically train a conditional random fields model for identifying translations. At runtime, we obtain mixed-code webpages containing the given term and run the model to extract translations as output. 
Preliminary experiments and evaluation results show our method cleanly combines various features, resulting in a system that outperforms previous works.", "pdf_parse": { "paper_id": "O13-2002", "_pdf_hash": "", "abstract": [ { "text": "In recent years, state-of-the-art cross-linguistic systems have been based on parallel corpora. Nevertheless, it is difficult at times to find translations of a certain technical term or named entity even with very large parallel corpora. In this paper, we present a new method for learning to find translations on the Web for a given term. In our approach, we use a small set of terms and translations to obtain mixed-code snippets returned by a search engine. We then automatically annotate the data with translation tags, automatically generate features to augment the tagged data, and automatically train a conditional random fields model for identifying translations. At runtime, we obtain mixed-code webpages containing the given term and run the model to extract translations as output. Preliminary experiments and evaluation results show our method cleanly combines various features, resulting in a system that outperforms previous works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The phrase translation problem is critical to many cross-language tasks, including statistical machine translation, cross-lingual information retrieval, and multilingual terminology (Bian & Chen, 2000; Kupiec, 1993) . Such systems typically use a bilingual lexicon or a parallel corpus to obtain phrase translations. Nevertheless, the out-of-vocabulary (OOV) problem is difficult to overcome, even with a very large training corpus, due to the Zipfian nature of word distributions. For a given English term, translations often appear in mixed-code snippets returned by a search engine; such translations can be extracted by classifying the Chinese characters in the snippets as either translation or otherwise. Intuitively, we can cast the problem as a sequence labeling task. 
To be effective, we need to associate each token (i.e., Chinese character or word) with some features to characterize the likelihood of the token being part of the translation. For example, by exploiting some external knowledge sources (e.g., bilingual dictionaries), we derive that the Chinese character \"\u8fa8\" (bian) in the Chinese word \"\u8fa8\uf9fc\" (bian-shi, recognition) is likely to be part of the translation of \"named entity recognition.\"", "cite_spans": [ { "start": 182, "end": 201, "text": "(Bian & Chen, 2000;", "ref_id": "BIBREF0" }, { "start": 202, "end": 215, "text": "Kupiec, 1993)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we present a new method that automatically obtains such labeled data and generates features for training a conditional random fields (CRF) model that is capable of identifying translations or transliterations in mixed-code snippets returned by search engines (e.g., Google or Bing). The system uses a small set of phrase-translation pairs to obtain search engine snippets that may contain both an English term and its Chinese translation. The snippets are then tagged automatically to train a CRF sequence labeler. We describe the training process in more detail in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "At run-time, we start with a given phrase (e.g., \"named-entity recognition\"), which is transformed into a query configured to retrieve webpages in the target language (e.g., Chinese). We then retrieve mixed-code snippets returned by the search engine and extract translations within the snippets. 
The identified translations can be used to supplement a bilingual terminology bank (e.g., adding multilingual titles to existing Wikipedia); alternatively, they can be used as additional training data for a machine translation system, as described in Lin, Zhao, Van Durme, and Pa\u015fca (2008) .", "cite_spans": [ { "start": 550, "end": 588, "text": "Lin, Zhao, Van Durme, and Pa\u015fca (2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Most previous works focus on extracting translation pairs where the counterpart terms appear near one another in the webpage, based on a limited set of short patterns. In our approach, we extract term and translation pairs that are near or far apart, and are not limited by a set of predefined patterns. We have evaluated our method based on English-Chinese language links in Wikipedia as the gold standard. Results show that our method produces output for 80% of the test cases with an exact match precision of 43%, outperforming previous works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper is organized as follows. In Section 2, we survey related work that also aims to mine translations from the Web. In Section 3, we briefly describe the resources we use. In Section 4, we describe in detail the problem statement and the proposed method. Finally, we report evaluation results and error analysis in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In machine translation, a source text is typically translated one sentence at a time, while cross-lingual information retrieval involves phrasal translation. The methods proposed in the literature for phrase translation rely on handcrafted bilingual dictionaries, transliteration tables, or bilingual corpora. 
For example, Knight and Graehl (1998) described and evaluated a multi-stage machine translation method for performing backwards transliteration of Japanese names and technical terms into English, while Bian and Chen (2000) described cross-language information access to multilingual collections on the Internet. Smadja, McKeown, and Hatzivassiloglou (1996) proposed an algorithm for producing collocation and translation pairs, including noun and verb phrases, in bilingual corpora. Similarly, Kupiec (1993) proposed an algorithm for finding noun phrase correspondence in bilingual corpora for bilingual lexicography and machine translation. Koehn and Knight (2003) described a noun phrase translation subsystem that improves word-based statistical machine translation methods.", "cite_spans": [ { "start": 330, "end": 354, "text": "Knight and Graehl (1998)", "ref_id": "BIBREF10" }, { "start": 519, "end": 539, "text": "Bian and Chen (2000)", "ref_id": "BIBREF0" }, { "start": 639, "end": 683, "text": "Smadja, McKeown, and Hatzivassiloglou (1996)", "ref_id": "BIBREF22" }, { "start": 821, "end": 834, "text": "Kupiec (1993)", "ref_id": "BIBREF12" }, { "start": 968, "end": 991, "text": "Koehn and Knight (2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Some methods in the literature also have aimed to exploit mixed-code webpages for word and phrase translation. Nagata, Saito, and Suzuki (2001) presented a system for finding English translations for a given Japanese technical term in search engine results. Their method extracts English phrases appearing near the given Japanese term, and it scores translation candidates based on co-occurrence counts and location. Cao and Li (2002) proposed an EM algorithm for finding translations for base noun phrases on the Web. Kwok et al. (2005) focused on named entity phrases and implemented a cross-lingual name finder based on Chinese-English webpages. 
Wu, Lin, and Chang (2005) proposed a method for learning a set of surface patterns to find terms and translations occurring within a short distance. Mixed-code webpage snippets were obtained by querying a search engine with English terms for Chinese webpages. They discovered that the most frequent pattern is one in which the translation is immediately followed by the source term, with a coverage rate of 46%. Their results also indicate that the stricter parenthetical pattern covers less than 30% of the translation instances. Lu, Chien, and Lee (2004) proposed a method for mining terms and translations from anchor text directly or transitively. In a follow-up project, Cheng et al. (2004) proposed a method for translating unknown queries with web corpora for cross-language information retrieval. Similarly, Gravano and Henzinger (2006) proposed systems and methods for using anchor text as parallel corpora for cross-language information retrieval.", "cite_spans": [ { "start": 111, "end": 143, "text": "Nagata, Saito, and Suzuki (2001)", "ref_id": "BIBREF19" }, { "start": 417, "end": 434, "text": "Cao and Li (2002)", "ref_id": "BIBREF1" }, { "start": 518, "end": 536, "text": "Kwok et al. (2005)", "ref_id": "BIBREF13" }, { "start": 648, "end": 673, "text": "Wu, Lin, and Chang (2005)", "ref_id": "BIBREF24" }, { "start": 1160, "end": 1185, "text": "Lu, Chien, and Lee (2004)", "ref_id": "BIBREF17" }, { "start": 1305, "end": 1324, "text": "Cheng et al. (2004)", "ref_id": "BIBREF3" }, { "start": 1445, "end": 1473, "text": "Gravano and Henzinger (2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In a study more closely related to our work, Lin et al. (2008) proposed a method that performs word alignment between Chinese translations and English phrases within parentheses in crawled webpages. Their paper also proposed a novel and automatic evaluation method based on Wikipedia. 
The main difference from our work is that the alignment process in Lin et al. (2008) is done heuristically using a competitive linking algorithm proposed by Melamed (2000) , while we use a learning-based approach to align words and phrases. Moreover, in their method, only parenthetical translations are considered. With only the parenthetical pattern, their method is able to extract a significant number of translation pairs from crawled webpages without a given list of target English phrases. By restricting matching to parenthetical surface patterns, however, their method may fail to capture many translation pairs in webpages, including term-translation pairs that are farther apart. In our work, we exploit surface patterns differently as a soft constraint in a CRF model and use an approach similar to Lin et al. (2008) to evaluate our results.", "cite_spans": [ { "start": 45, "end": 62, "text": "Lin et al. (2008)", "ref_id": "BIBREF16" }, { "start": 352, "end": 369, "text": "Lin et al. (2008)", "ref_id": "BIBREF16" }, { "start": 442, "end": 456, "text": "Melamed (2000)", "ref_id": "BIBREF18" }, { "start": 1073, "end": 1090, "text": "Lin et al. (2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Researchers also have explored the hyperlinks in webpages as a source of bilingual", "sec_num": null }, { "text": "In contrast to the previous work in phrase and query translation, we present a learning-based approach that uses annotated data to develop the system. Nevertheless, we do not require human intervention to prepare the training data, but instead make use of language links in Wikipedia to automatically obtain the training data. The annotated data is further augmented with features indicative of translation and transliteration relations obtained from external lexical knowledge sources publicly available on the Web. 
The trained CRF sequence labeler is then used to find translations on the Web for a given term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Researchers also have explored the hyperlinks in webpages as a source of bilingual", "sec_num": null }, { "text": "In this work, we rely on several resources that are available on the Internet. These resources are used for different purposes: the seed data are used for obtaining and labeling training data, the gold standard is used for automatic evaluation, and the external knowledge sources are used for generating features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources", "sec_num": "3." }, { "text": "Wikipedia is an online encyclopedia compiled by volunteers around the world. Anyone on the Internet can edit existing entries or create new entries to add to Wikipedia. Owing to the number of its participants, Wikipedia has achieved both high quantity and a quality comparable to traditional encyclopedias compiled by experts (Giles, 2005) . For these reasons, Wikipedia has become the largest and most popular reference tool.", "cite_spans": [ { "start": 326, "end": 339, "text": "(Giles, 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia", "sec_num": "3.1" }, { "text": "We extracted bilingual title pairs from the English and Chinese editions of Wikipedia as the gold standard for evaluation and as seeds to automatically collect and label training data from the Internet by querying search engines. Entries on the same topic among different language editions of Wikipedia are interlinked via the so-called language links. Nevertheless, only a small percentage of English articles are linked to editions of other languages. The Chinese Wikipedia contains only 398,206 articles, making it roughly one-tenth the size of the English Wikipedia. Furthermore, only 5% of the entries in the English Wikipedia contain language links to their Chinese counterparts. 
The proposed method can be used to find the translations of those English terms, thus speeding up the process of building a more complete multilingual Wikipedia. As will be described in Section 4, we extracted the titles of English-Chinese article pairs connected by language links for training and testing purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia", "sec_num": "3.1" }, { "text": "The content of Wikipedia is freely downloadable online. 1 We used the Google Freebase Wikipedia Extraction (WEX) instead of the official raw dump. The WEX is a processed version of the official dump, with the Wikipedia syntax transformed into XML. The WEX database can be freely downloaded online. 2", "cite_spans": [ { "start": 56, "end": 57, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia", "sec_num": "3.1" }, { "text": "WordNet is a freely available, handcrafted lexical semantic database for English. 3 Its development was started in 1985 at Princeton University by a team of cognitive scientists, and WordNet was originally intended to support psycholinguistic research. Over the years, WordNet has become increasingly popular in the fields of information retrieval, natural language processing, and artificial intelligence. Through each release, WordNet has grown into a comprehensive database of concepts in the English language. As of today, the stable 3.0 version of WordNet contains 207,000 semantic relations between 150,000 words organized in over 115,000 senses.", "cite_spans": [ { "start": 82, "end": 83, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "WordNet", "sec_num": "3.2" }, { "text": "Senses in WordNet are represented as synonym sets (synsets). A synset with a definition contains one or more words, or lemmas, that express the same meaning. 
In addition, WordNet provides other information for each synset, including example sentences and estimated frequency. For example, the synset {block, city_block} is defined as a rectangular area in a city surrounded by streets, whereas the synset {block, cube} is defined as a three-dimensional shape with six square or rectangular sides. WordNet also records various semantic relations between its senses. These relations include hypernyms, hyponyms, coordinate terms, holonyms, and meronyms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet", "sec_num": "3.2" }, { "text": "The Sinica Bilingual WordNet is part of the publicly accessible Sinica Bilingual Ontological WordNet (Sinica BOW) (Huang, 2003) . In this work, we treat the Sinica Bilingual WordNet as a bilingual dictionary, and use it as an external knowledge source to generate features for training the CRF model.", "cite_spans": [ { "start": 114, "end": 127, "text": "(Huang, 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Sinica Bilingual WordNet", "sec_num": "3.3" }, { "text": "The Sinica Bilingual WordNet is a hand-crafted English-Chinese version of the original Princeton WordNet 1.6. It was compiled by collecting all possible Chinese translations of a synset's lemmas from various online bilingual dictionaries before a team of translators manually edited the acquired translations. For each synset, the translators selected at most three appropriate lexicalized words as translation equivalents. The Sinica BOW system is freely accessible online. 4 The Sinica Bilingual WordNet database can also be licensed for download. 5", "cite_spans": [ { "start": 479, "end": 480, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Sinica Bilingual WordNet", "sec_num": "3.3" }, { "text": "The NICT Bilingual Technical Term Database is a resource freely available online. 
6 In addition to the Sinica Bilingual WordNet, we also used the NICT database to generate features. While the Sinica Bilingual WordNet mainly contains common nouns, the NICT database mainly contains technical terms and proper nouns. By combining the two resources, we can generate translational features covering both common nouns and proper nouns.", "cite_spans": [ { "start": 82, "end": 83, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "NICT Bilingual Technical Term Database", "sec_num": "3.4" }, { "text": "The NICT Bilingual Technical Term Database is maintained by committees in the National Academy for Educational Research of Taiwan (formerly National Institute for Compilation and Translation). The goal is to pursue more uniform and standardized translations for technical terms used in textbooks, patents, national standards, and open source software. It contains over 1.1 million Chinese-English term translation pairs arranged into 72 categories (Table 9 ) and is kept up to date by constantly including more terms. Any user can suggest a new term and translation to the committees to be added to the database.", "cite_spans": [], "ref_spans": [ { "start": 449, "end": 457, "text": "(Table 9", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "NICT Bilingual Technical Term Database", "sec_num": "3.4" }, { "text": "In 2006, Google published an n-gram dataset based on public webpages, licensed through the Linguistic Data Consortium. 7 The Google Web 1T corpus is a 24 GB (gzip compressed) corpus that consists of n-grams ranging from unigrams to five-grams generated from approximately 1 trillion words in publicly accessible Web pages. 
In this work, we use the Web 1T corpus to select, for manual evaluation, unlinked entries in the English Wikipedia that have high frequency on the Web.", "cite_spans": [ { "start": 118, "end": 119, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Google Web 1T N-grams", "sec_num": "3.5" }, { "text": "Submitting an English phrase (e.g., \"named-entity recognition\") to search engines to find translations or transliterations is a good strategy used by many translators (Quah, 2006) . Unfortunately, the user has to sift through snippets to find the translations. Such translations usually exhibit characteristics related to word translation, word transliteration, surface patterns, and proximity to the occurrences of the given phrase. To find translations for a given term on the Web, a promising approach is automatically learning to extract phrasal translations or transliterations of a given query using the conditional random fields (CRF) model. To avoid human effort in preparing annotated data for training the model, we use an automatic procedure to retrieve and tag mixed-code search engine snippets using a set of bilingual Wikipedia titles. We also propose using external knowledge sources (i.e., bilingual dictionaries, name lists, and terminology banks) to generate translational and transliterational features.", "cite_spans": [ { "start": 166, "end": 178, "text": "(Quah, 2006)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4." }, { "text": "We focus on the issue of finding translations in mixed-code snippets returned by a search engine. The translations are identified, tallied, ranked, and returned as the output of the system. The returned translations can be used to supplement existing multilingual terminology banks, or used as additional training data for a machine translation system. 
Therefore, our goal is to return several reasonably precise translations that are available on the Web for the given phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "4.1" }, { "text": "Problem Statement: Given a phrasal term P and a full-text search engine SE (e.g., Bing or Google) that operates over a mixed-code document collection (e.g., the Web), our goal is to retrieve a probable translation T of P via SE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "4.1" }, { "text": "For this, we extract a set of translation candidates, c_1, ..., c_m, from a set of mixed-code snippets, s_1, ..., s_n, returned by SE, such that these candidates are likely to be translations T of P.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "4.1" }, { "text": "In the rest of this section, we describe our solution to this problem. First, we briefly introduce the Conditional Random Fields (CRF) model in Section 4.2. We describe a strategy (see Figure 1 ) for obtaining training data for identifying translations in snippets returned by SE (Section 4.3.2). This strategy relies on a set of term-translation pairs for training, derived from Wikipedia language links (Section 4.3.1). We will also describe our method for exploiting external knowledge sources to generate translation features (Section 4.3.2), transliteration features (Section 4.3.3), and distance features (Section 4.3.4) for sequence labeling. 
Finally, in Section 4.4, we describe how to extract and filter translations at run-time by applying the trained sequence labeler.", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 265, "text": "Figure 1 ", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "Sequence labeling is the task of assigning labels from a finite set of categories to a sequence of observations. This problem is encountered in the field of computational linguistics, as well as in many other fields, including bioinformatics, speech recognition, and pattern recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "4.2" }, { "text": "Traditionally, the sequence labeling problem is often solved using the Hidden Markov Model (HMM) or the Maximum Entropy Markov Model (MEMM). Both HMM and MEMM are directed graph models in which every outcome is conditioned on the corresponding observation node and the previous outcomes (i.e., the Markov property).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "4.2" }, { "text": "Conditional Random Fields (CRF), proposed by Lafferty, McCallum, and Pereira (2001) , is considered the state-of-the-art sequence labeling algorithm. One of the major differences of CRF is that it is modeled as an undirected graph. For sequence labeling, the CRF graph is structured as an undirected linear chain (linear-chain CRF). CRF obeys the Markov property with respect to the undirected graph, as every outcome is conditioned on its neighboring outcomes and potentially the entire observation sequence. In our case, the outcomes are B, I, O labels that indicate a sequence of Chinese characters in the search engine snippets that is likely the translation or transliteration of the given English term. 
The information available (the observables) for sequence labeling are the characters in the snippets themselves, and the three types of features we generate. ", "cite_spans": [ { "start": 45, "end": 83, "text": "Lafferty, McCallum, and Pereira (2001)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "4.2" }, { "text": "We attempt to learn to find translations or transliterations for given phrases on the Web. For this, we make use of language links in Wikipedia to obtain seed data, retrieve mixed-code snippets returned by a search engine, and augment feature values based on external knowledge sources. Our learning process is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 320, "end": 328, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Preparing Data for CRF Classifier", "sec_num": "4.3" }, { "text": "In the first stage of the training phase, we extracted Wikipedia English titles and their Chinese counterparts using the language links as the seed data for training. We use the English titles to query a search engine (e.g., Google or Bing) with the target Web page language set to Chinese. This strategy will bias the search engine to return Chinese web pages interspersed with some English phrases. We then automatically labeled each Chinese character in the returned snippets, using the common BIO notation, with B, I, O indicating the beginning, inside, and outside of translations, respectively (e.g., \u652f\u63f4\u5411\uf97e\u6a5f zhiyuan-xiangliang-ji). An additional E tag is used to indicate the occurrences of the given term (e.g., support vector machine). The output of this stage is a set of tagged snippets that can be used to train a statistical sequence classifier for identifying translations. A sample of two tagged snippets, automatically generated from bilingual Wikipedia titles, is shown in Figure 3 . 
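The automatic labeling step just described can be sketched in a few lines. This is a simplified illustration under assumptions of our own (the function and parameter names below are not from the paper), given a seed translation string and the tokens of the English query term:

```python
# Sketch of the automatic BIO/E labeling of one mixed-code snippet,
# given a seed (term, translation) pair from Wikipedia language links.
# Simplified illustration; names and details are our own assumptions.

def tag_snippet(tokens, translation, term_tokens):
    # tokens: the snippet as a list of Chinese characters and English words
    labels = ['O'] * len(tokens)
    t = list(translation)
    # mark each occurrence of the known translation as B I I ...
    for i in range(len(tokens) - len(t) + 1):
        if tokens[i:i + len(t)] == t:
            labels[i] = 'B'
            for j in range(i + 1, i + len(t)):
                labels[j] = 'I'
    # mark each token of the given English term with E
    term_set = set(term_tokens)
    for i, tok in enumerate(tokens):
        if tok in term_set:
            labels[i] = 'E'
    return labels
```

Running such a procedure over many snippets for many seed pairs yields label sequences such as B I I I I O E E E without any manual annotation.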
The E tags are designed to provide proximity cues for labeling the translation and capture common surface patterns of the phrase and translation in mixed-code data. For example, in Figure 3 , the translation \u652f\u6301\u5411\uf97e\u6a5f (zhichi xiangliang ji) is tagged with one B tag and four I tags, followed by the left parenthesis and three E tags. The translation \u5149\u901a\uf97e (guangtong liang) is tagged with one B tag and two I tags, immediately followed by two E tags. Such sequences (i.e., B I I I I O E E E and B I I E E) are two of many common patterns.", "cite_spans": [], "ref_spans": [ { "start": 988, "end": 996, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 1180, "end": 1188, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Retrieving and Tagging Snippets", "sec_num": "4.3.1" }, { "text": "1. \u20261995/O \uf98e/O \u63d0/O \u51fa/O \u7684/O \u652f/B \u6301/I \u5411/I \uf97e/I \u6a5f/I (/O support/E vector/E machine/E\uff0c/O SVM/O)/O \u4ee5/O \u8a13/O \uf996/O \u2026 2. \u2026\u767c/O \u5149/O \u539f/O \uf9e4/O \uf967/O \u540c/O\u3002/O \u5149/B \u901a/I \uf97e/I luminous/E flux/E \u5149/O \u6e90/O \u5728/O \u55ae/O \u4f4d/O \u6642/O \u9593/O \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieving and Tagging Snippets", "sec_num": "4.3.1" }, { "text": "Note that we do not attempt to produce word alignment information, as done in Lin et al. (2008) . In contrast, we only use the BIO labeling scheme to indicate phrasal translations, leading to a smaller number of parameters required to be estimated during the training process.", "cite_spans": [ { "start": 78, "end": 95, "text": "Lin et al. 
(2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Retrieving and Tagging Snippets", "sec_num": "4.3.1" }, { "text": "We generate translation features using external bilingual resources with the \u03c6\u00b2 score proposed by Gale and Church (1991) to measure the correlations between an English word and a Chinese character: \u03c6\u00b2(e, f) = (ad \u2212 bc)\u00b2 / ((a + b)(a + c)(b + d)(c + d)) (1), where e is an English word and f is a Chinese character occurring in bilingual phrase pairs, and a, b, c, and d are the cells of the 2-by-2 contingency table of e and f (co-occurring, e only, f only, and neither, respectively). We use Table 1 , with only three entries, to explain how the probabilities are calculated. We treat each entry in the dictionary as an event, and calculate the probability of each Chinese character and English word by counting the number of events containing them, as shown in Table 2 . Similarly, we can calculate the joint probability of an English word and a Chinese character by counting their co-occurrences in the dictionary. In Table 3 , we show the contingency table calculated by counting co-occurrences in Bilingual WordNet and NICT termbank for (\u5411 xiang, vector) , (\uf97e liang, vector) , and (\u6a5f ji, machine). The statistical association between an English word (e.g., vector) and its translation (e.g., \u5411 (xiang)) is indicated by the high count of co-occurrences, as well as the lower values of the two off-diagonal cells. From the contingency tables, we can calculate the corresponding \u03c6\u00b2 scores for \u5411 xiang, \uf97e liang, and \u6a5f ji: 0.06530, 0.02880, and 0.09068. 
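The phi-squared computation can be sketched directly from its contingency-table definition. The function below is our own illustration of the Gale and Church score; any counts fed to it here would be hypothetical, not the actual Table 3 values:

```python
# Sketch of the phi-squared association score of Gale and Church (1991),
# computed from a 2x2 contingency table over dictionary entries:
#   a = entries containing both e and f,  b = entries with e but not f,
#   c = entries with f but not e,         d = entries with neither.
# Illustrative only; real counts would come from the bilingual resources.

def phi_squared(a, b, c, d):
    num = float(a * d - b * c) ** 2
    den = float((a + b) * (a + c) * (b + d) * (c + d))
    return num / den if den else 0.0
```

A perfectly associated pair (b = c = 0) scores 1, and a statistically independent pair scores 0, so higher values signal a likely translation correspondence.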
", "cite_spans": [ { "start": 99, "end": 121, "text": "Gale and Church (1991)", "ref_id": "BIBREF5" }, { "start": 842, "end": 859, "text": "(\u5411 xiang, vector)", "ref_id": null }, { "start": 862, "end": 879, "text": "(\uf97e liang, vector)", "ref_id": null } ], "ref_spans": [ { "start": 296, "end": 303, "text": "Table 1", "ref_id": null }, { "start": 563, "end": 570, "text": "Table 2", "ref_id": null }, { "start": 721, "end": 728, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Generating Translation Features", "sec_num": "4.3.2" }, { "text": "Here, e is a word in the given English phrase E, and f is a Chinese character in a snippet. This feature value is rounded to a whole number in order to limit the number of distinct feature values. In Table 4 , we show the \u03c6\u00b2 scores of each Chinese character in snippets from searching Google with the given terms, i.e., support vector machine and luminous flux. Notice that there are some noisy feature values in the second example: the Chinese characters in the word \u767c\u5149 (faguang, glow or illuminate) have non-zero \u03c6\u00b2 scores. However, the tagger can potentially overcome such noise by relying on other features, such as the distance feature (Section 4.3.4). Moreover, in most cases there are multiple snippets for a given term, from which we can confidently identify the translations with higher frequencies. As an example, we show two snippets tagged with translation features in Figure 4 . In this example, the translation characters are given feature values ranging from 2 to 7, while non-translation ones are mostly 0.", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 830, "end": 838, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Generating Translation Features", "sec_num": "4.3.2" }, { "text": "1. ... 
1995/0 \u5e74/0 \u63d0/0 \u51fa/0 \u7684/0 \u652f/7 \u6301/2 \u5411/6 \u91cf/5 \u6a5f/7 (/0 support/E vector/E machine/E\uff0c/0 SVM/0 )/0 \u4ee5/0 \u8a13/0 \u7df4/0 ... 2. ... \u767c/0 \u5149/5 \u539f/0 \u7406/0 \u4e0d/0 \u540c/0 \u3002/0 \u5149/5 \u901a/7 \u91cf/5 luminous/E flux/E \u5149/5 \u6e90/0 \u5728/0 \u55ae/0 \u4f4d/0 \u6642/0 \u9593/0 ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Translation Features", "sec_num": "4.3.2" }, { "text": "the terms \"support vector machine\" and \"luminous flux\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4. Example of two snippets tagged with translation features given", "sec_num": null }, { "text": "We generate the additional features related to transliteration using external knowledge resources. It is important to include transliteration in the feature set, since many named entities and technical terms are fully or partially transliterated into a foreign language. Thus, the translation feature described in Section 4.3.2 alone is not enough. For this, we collect transliterated titles from the entries connected with language links across the English and the Chinese Wikipedia to calculate the correlation between the target transliteration characters and English sublexical strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Transliteration Features", "sec_num": "4.3.3" }, { "text": "We observed that names of persons and geographic locations are mostly transliterated, and that the entries titled with names of persons or locations can be extracted easily from Wikipedia using the categories of each entry. As will be described in Section 5, we extracted Wikipedia articles tagged with categories that match \"Birth in ...\" to find articles describing a person, and categories that match \"Cities in ...\" and \"Capitals in ...\" to find titles describing a geographic location. We show some named entities in Table 6 . 
After obtaining the transliteration pairs from Wikipedia, we align the Chinese and English syllables. In Chinese, every character represents exactly one syllable. Nevertheless, the counterpart \"syllables\" in an English word are not as easy to determine. These counterparts are not syllables in the regular sense, for some counterpart \"syllables\" may contain a single consonant. We assume every extracted Chinese-English transliteration pair contains the same number of syllables, i.e., equal to the number of Chinese characters. We also assume the syllables are transliterated in order. Under these assumptions, we can segment an English word into a number of segments equal to the number of characters in its Chinese transliteration, and align the English segments and Chinese characters in order. For example, as shown in Table 5 , the English name Joseph is transliterated into three Chinese characters, or syllables, \u55ac\u745f\u592b qiao-se-fu; therefore, all possible segmentations include: j-o-seph, j-os-eph, j-ose-ph, j-osep-h, jo-s-eph, jo-se-ph, jo-sep-h, jos-e-ph, ..., etc.", "cite_spans": [], "ref_spans": [ { "start": 524, "end": 531, "text": "Table 6", "ref_id": null }, { "start": 1363, "end": 1370, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Generating Transliteration Features", "sec_num": "4.3.3" }, { "text": "We use the Expectation-Maximization (EM) algorithm to estimate the conditional probabilities P(f|e) modeling the correlation between the Romanized Chinese characters and their English counterparts. For Chinese characters that have ambiguous pronunciations, we use the Romanization of the most frequent pronunciation according to the Chinese Electronic Dictionary from Academia Sinica, available for download from The Association for Computational Linguistics and Chinese Language Processing (ACLCLP). 
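The candidate segmentations enumerated above can be generated by choosing k-1 cut points inside the English word; a minimal sketch (function name ours):

```python
from itertools import combinations

def segmentations(word, k):
    """All ways to cut `word` into k non-empty contiguous segments, in order."""
    n = len(word)
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        yield [word[i:j] for i, j in zip(bounds, bounds[1:])]

# "joseph" against the 3 characters of its transliteration qiao-se-fu:
segs = list(segmentations("joseph", 3))
# includes ['j', 'o', 'seph'] and ['jo', 'se', 'ph']; C(5, 2) = 10 candidates in total
```

The EM algorithm then weights these candidate alignments rather than committing to any single one up front.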
8 In the E-step, the expected log-likelihood of each segmentation candidate is evaluated using the current estimate of P(f|e). In the M-step, the conditional probability estimates are updated based on the maximum likelihood estimates (MLE) from the E-step. A few examples of the segmentation results are shown in Table 6 . After aligning the syllables in the transliteration pairs, we then calculate the conditional probability of each Romanized Chinese character given its English counterpart. Example output of three Romanized Chinese characters and their top English counterparts is shown in Table 7 . Nevertheless, generating transliteration features for each Chinese character (Romanized) in isolation tends to produce a lot of false positives. Therefore, we assume that a named entity is transliterated into at least two Chinese characters, and generate the transliteration features of a Chinese character taking into consideration the preceding and following characters. Admittedly, we probably miss some transliteration cases, such as Jean and \u7434 (qin), but that represents a small loss.", "cite_spans": [ { "start": 503, "end": 504, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 832, "end": 839, "text": "Table 6", "ref_id": null }, { "start": 1111, "end": 1118, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Generating Transliteration Features", "sec_num": "4.3.3" }, { "text": "In general, this strategy works quite well for our purpose. 
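The bigram-based transliteration feature can be sketched as follows, assuming an EM-estimated table p mapping (Romanized character, English substring) pairs to probabilities; the function names and the toy probability values are ours, for illustration only:

```python
def p_bigram_given_word(f1, f2, word, p):
    """P(f1 f2 | word): best split of `word` into substrings x + y
    with f1 transliterating x and f2 transliterating y."""
    return max(
        (p.get((f1, word[:i]), 0.0) * p.get((f2, word[i:]), 0.0)
         for i in range(1, len(word))),
        default=0.0,
    )

def translit_feature(prev_f, f, next_f, words, p):
    """Score character f by its best bigram with a neighbouring
    character against any word of the given English term."""
    scores = [0.0]
    for w in words:
        if prev_f is not None:
            scores.append(p_bigram_given_word(prev_f, f, w, p))
        if next_f is not None:
            scores.append(p_bigram_given_word(f, next_f, w, p))
    return max(scores)

# Toy table (hypothetical values): qiao ~ "jo", bu ~ "bs"
p = {("qiao", "jo"): 0.6, ("bu", "bs"): 0.5}
score = translit_feature("qiao", "bu", "si", ["steve", "jobs"], p)
```

Taking the maximum over both neighbouring bigrams and all words of the term implements the requirement that a character is only rewarded when it plausibly transliterates part of the given English phrase together with a neighbour.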
For example, given the character sequence \u55ac\u5e03\u65af(qiao-bu-si) and the term Steve Jobs, to calculate the transliteration score for the Chinese character \u5e03(bu), we calculate the probability of \u55ac\u5e03(qiao-bu) and \u5e03\u65af(bu-si) being part of a transliteration of Steve or Jobs: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(\u55ac\u5e03|E) = max(P(\u55ac\u5e03|steve), P(\u55ac\u5e03|jobs)), P(\u5e03\u65af|E) = max(P(\u5e03\u65af|steve), P(\u5e03\u65af|jobs))", "eq_num": "(2)" } ], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "To calculate the conditional probability for the Chinese bi-characters \u55ac\u5e03 qiao-bu given the English term jobs, we generate all substrings xy of jobs, into which qiao-bu can be transliterated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "P(\u55ac\u5e03|jobs) = max_{xy = jobs} P(\u55ac|x) P(\u5e03|y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "1. ... \u6cd5-fa/0 \u570b-guo/0 \u7acb-li/0 \u9ad4-ti/2 \u4e3b-zhu/2 \u7fa9-yi/0 \u756b-hua/0 \u5bb6-jia/4 \u55ac-qiao/7 \u6cbb-zhi/7 \uff0e/0 \u5e03-bu/8 \u62c9-la/8 \u514b-ke/4 (/0 georges/E braque/E)/0 ... 2. ... 
\u7b2c-di/0 62/0 \u5c46-jie/0 \u827e-ai/3 \u7f8e-mei/3 \u734e-jiang/0 \u9812-ban/0 \u734e-jiang/0 \u5178-dian/0 \u79ae-li/0 \u300b/0(/0 the/0 62nd/0 Emmy/E Award/E )/0 ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "We show two examples of the data tagged with transliteration feature values in Figure 5 . In the first example, given the phrase Georges Braque, the name of a French painter, the goal is to find its Chinese transliteration \"\u55ac\u6cbb\uff0e\u5e03\u62c9\u514b (qiao-zhi bu-la-ke)\". The respective feature scores for each of the characters in the transliteration are 7 7 0 8 8 4. The symbol \"\uff0e\", with a feature value of zero, is commonly used in Chinese name transliteration to mark the boundary between the first and last names of foreign names, and it can be identified as part of the answer by its surrounding transliteration feature scores and the surface pattern. Also in the first example, the Chinese character \u5bb6(jia), the second syllable of \u756b\u5bb6(hua-jia, painter), has a noisy non-zero feature value of four, due to the fact that the English syllable geo is often transliterated into this Chinese syllable jia. In the second example, the given phrase is Emmy Award, where the first part of the phrase, Emmy, is transliterated into \u827e\u7f8e(ai-mei), and the second part of the phrase, Award, is translated into \u734e(jiang). The Chinese characters \u827e and \u7f8e both have a feature value of 3, while all other characters in the example have a feature value of zero. We also show this example tagged with all types of feature values we generate in Table 8 .", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 87, "text": "Figure 5", "ref_id": null }, { "start": 1284, "end": 1291, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Figure 5. 
Example of transliteration features given Georges Braque to find the Chinese transliteration \"\u55ac\u6cbb\uff0e\u5e03\u62c9\u514b\" and given Emmy Award to find \"\u827e\u7f8e\u734e\"", "sec_num": null }, { "text": "Finally, we generate the distance features and train a CRF model. The distance feature is intended to exploit the fact that translations tend to occur near the source term, as pointed out in Nagata et al. (2001) and Wu et al. (2005) . Therefore, we incorporated distance as an additional feature type, to impose a soft constraint on the positional relation between a translation and its English counterpart.", "cite_spans": [ { "start": 191, "end": 211, "text": "Nagata et al. (2001)", "ref_id": "BIBREF19" }, { "start": 216, "end": 232, "text": "Wu et al. (2005)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Distance Features", "sec_num": "4.3.4" }, { "text": "An example showing all three kinds of features and labels is shown in Table 8 . This example shows that the given term Emmy Award has a Chinese counterpart that is part transliteration (Emmy with a transliteration \u827e\u7f8e ai-mei) and part translation (Award with the translation \u734e jiang). 
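One encoding of the distance feature consistent with the values shown in Table 8 (tokens count down toward the given English term, 0 on the term itself, negative after it) can be sketched as follows; the function name is ours:

```python
def distance_features(tokens, is_english_term):
    """Distance feature per token: positive counts approaching the first
    token of the given English term, 0 on the term, negative after it."""
    first = is_english_term.index(True)
    last = len(is_english_term) - 1 - is_english_term[::-1].index(True)
    feats = []
    for i, _tok in enumerate(tokens):
        if is_english_term[i]:
            feats.append(0)          # the English term itself
        elif i < first:
            feats.append(first - i)  # counting down toward the term
        else:
            feats.append(last - i)   # negative values after the term
    return feats
```

Because the value shrinks monotonically toward the term, the CRF can learn that B/I labels are far more likely at small positive distances, without imposing a hard window.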
This is a typical case that our method is designed to handle using both translation and transliteration features. ", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Generating Distance Features", "sec_num": "4.3.4" }, { "text": "\u827e 3 0 11 B (Emmy) \u7f8e 3 0 10 I (Award) \u734e 0 5 9 I \u9812 0 0 8 O (awarding) \u734e 0 0 7 O \u5178 0 0 6 O (ceremony) \u79ae 0 0 5 O \u300b 0 0 4 O ( 0 0 3 O the 0 0 2 O 62nd 0 0 1 O Emmy 0 0 0 E Award 0 0 0 E ) 0 0 -1 O", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Distance Features", "sec_num": "4.3.4" }, { "text": "Once the CRF model is automatically trained, we attempt to find translations for a given phrase using the procedure in Figure 6 .", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 127, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Runtime Translation Extraction", "sec_num": "4.4" }, { "text": "In Step 1, the system submits the given phrase as a query to a search engine (SE) to retrieve snippets. Then, for each token in each snippet, we generate three kinds of features (Step 2). This process is exactly the same as in the training phase. In Step 3, we run the CRF model on the snippets to generate labels. Then, in Step 4, we extract the Chinese strings with a sequence of B, I, ..., I tags as translation candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Runtime Translation Extraction", "sec_num": "4.4" }, { "text": "Step 5, we compute the frequency of all of the candidates identified in all snippets, and output the candidate with the highest frequency as output. 
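The candidate extraction and frequency-based ranking of Steps 4 and 5 can be sketched as follows (function names ours; the CRF labels are assumed to be the B/I/O/E tags used throughout this paper):

```python
from collections import Counter

def extract_candidates(tokens, labels):
    """Pull out token spans tagged B, I, ..., I as translation candidates."""
    spans, cur = [], []
    for tok, lab in zip(tokens, labels):
        if lab == "B":              # start of a new candidate
            if cur:
                spans.append("".join(cur))
            cur = [tok]
        elif lab == "I" and cur:    # continuation of the current candidate
            cur.append(tok)
        else:                       # O, E, or stray I: flush
            if cur:
                spans.append("".join(cur))
            cur = []
    if cur:
        spans.append("".join(cur))
    return spans

def best_translation(labeled_snippets):
    """Rank candidates from all snippets by frequency; most frequent wins."""
    counts = Counter(cand for tokens, labels in labeled_snippets
                          for cand in extract_candidates(tokens, labels))
    return counts.most_common(1)[0][0] if counts else None
```

Aggregating counts over all retrieved snippets before ranking is what lets redundancy on the Web outvote the occasional mislabeled snippet.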
When there is a tie, one of the candidates with the same highest frequency is randomly selected as the output. For transliteration features, we extracted person or location entries in Wikipedia using such categories as \"Birth in ...\" to find titles for a person, and categories such as \"Cities in ...\" and \"Capitals in ...\" to find titles for a geographic location. A total of some 15,000 bilingual person names and 24,000 bilingual place names were obtained and force-aligned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finally, in", "sec_num": null }, { "text": "To compare our method with previous work, we used an evaluation procedure similar to that described in Lin et al. (2008) . We ran the system and produced the translations for these 2,181 test entries, and we automatically evaluated the results using the metrics of coverage and exact match precision based on the Wikipedia language links. We removed all search snippets from the wikipedia.org domain to ensure a strict separation of the training and test datasets. This precision rate is an underestimation, since a term may have many alternative translations that do not match exactly with the single reference translation. To obtain a more accurate estimate of the real precision rate, we resorted to manual evaluation.", "cite_spans": [ { "start": 97, "end": 114, "text": "Lin et al. (2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Finally, in", "sec_num": null }, { "text": "We selected a small part of the 2,181 English phrases and manually evaluated the results. We report the results of automatic evaluation in Section 5.1 and the results of manual evaluation in Section 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finally, in", "sec_num": null }, { "text": "In this section, we describe the evaluation based on the set of 2,181 English-Chinese title pairs extracted from Wikipedia as the gold standard and automatically evaluate coverage (applicability) and exact match precision. 
Coverage is measured by the percentage of titles for which the proposed system produces some translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "When translations were extracted, we selected the most frequent translations as output, and checked for exact match against the reference answer. Table 10 shows the results we obtained as compared to the results reported by Lin et al. (2008) .", "cite_spans": [ { "start": 224, "end": 241, "text": "Lin et al. (2008)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 146, "end": 154, "text": "Table 10", "ref_id": null } ], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "We explored the performance differences of the systems employing different sets of features. The systems evaluated are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "\u2022 Full: the proposed system trained with all feature types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "\u2022 -TL : the proposed system trained without the translation feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "\u2022 -TR : the proposed system trained without the transliteration feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "\u2022 -TL-TR : the proposed system only using the distance feature. No external knowledge used. \u2022 NICT : the freely available NICT technical term bilingual dictionary with 1,138,653 translation pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "Notice that, although Lin et al. 
(2008) also used bilingual Wikipedia title pairs for evaluation, they used an earlier snapshot of Wikipedia and worked with full webpages crawled from the Internet without a list of given terms. We worked with the list of English terms given as input, but worked only with search engine snippets. In the previous work, all of the bilingual title pairs extracted from Wikipedia were used for evaluation. In our work, only a portion of the title pairs were used for evaluation, and the rest were used for generating the training data. It is often difficult to compare systems with different experimental settings. Nevertheless, the evaluation results seem to indicate that the proposed method compares favorably with the results reported in the previous work.", "cite_spans": [ { "start": 22, "end": 39, "text": "Lin et al. (2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "With a given target English term as input, the proposed system uses a search engine to retrieve a limited number of relevant webpage snippets, and attempts to find the Chinese translation within the retrieved text. The proposed system extracts translations in all cases without being limited to a small set of surface patterns, and has a significantly higher coverage and precision rate than previous methods that rely on parenthetical patterns only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "As shown in Table 10 , we found that using external knowledge to generate features improves system performance significantly. Adding the translation feature (i.e., Full vs. -TL) or the transliteration feature (i.e., Full vs. -TR) improves exact match precision by about 6% and 16%, respectively. 
Due to the fact that many Wikipedia titles are fully or partially transliterated into Chinese, the transliteration feature was found to be more important than the translation feature.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Table 10", "ref_id": null } ], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "The results also clearly show that finding translations on the Web has the advantage of better coverage than simply looking up phrases in a terminology bank (with a coverage rate of 24%), or a bilingual dictionary (with a coverage rate of 11%). Although using the NICT terminology bank or LDC bilingual dictionary directly has the worst performance, using them as external knowledge sources improves the performance of the CRF model significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "Overall, the full system performed the best, finding translations for 8 out of 10 phrases with an average exact match precision rate of over 40%. Nearly 60% of the exact matches appear in the Top 5 candidates. Leaving out the transliteration feature degraded the precision rate by 16%, far more than leaving out the translation feature. This is to be expected, since English Wikipedia has considerably more named entities with transliterated counterparts in Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "In this section, we present two sets of manual evaluation. In Section 5.2.1, we manually evaluate the results produced by the full system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "5.2" }, { "text": "Since an English phrase is often translated into several Chinese counterparts, evaluation based on exact match against a single reference answer leads to under-estimation. 
Therefore, we asked a human judge to examine and mark the output of our full system. The judge was instructed to mark each output as A: correct translation alternative, B: correct translation but with a different sense from the reference, P: partially correct translation, and E: incorrect translation. Table 11 shows 24 randomly selected translations that do not match the relevant reference translations. Half of the translations (12) are correct translations (A and B), while a third (8) are partially correct translations (P). Notice that it is a common practice to translate only the surname of a foreign person. So, four of the eight partial translations may be considered as correct.", "cite_spans": [], "ref_spans": [ { "start": 476, "end": 484, "text": "Table 11", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Error Analysis on Automatic Evaluation", "sec_num": "5.2.1" }, { "text": "In Table 12 , we show extracted candidates and frequency counts for 8 example terms. Translation candidates are marked using the same A, B, P, and E tags as in Table 11 , plus an additional tag, M, to indicate an exact match. For the given term money laundering, the system extracted 27 exact matches (\u6d17\u9322), 2 correct alternatives (\u6d17\u9ed1\u9322), and only 1 erroneous output from 30 snippets returned from the search engine. While technical terms like money laundering tend to have literal translations and result in more exact matches, movie titles are often translated into Chinese with completely different meanings. For example, the official Chinese title in Taiwan for the movie Music and Lyrics is \"K\u6b4c\u60c5\u4eba\" (meaning karaoke-song-lover). Given such a title as input, the system was able to extract 18 partial matches and 2 exact matches based on surface patterns and a modest translation feature value for music and \u6b4c(ge, song). 
For the given term colony, the system extracted \u83cc\u843d(colony of fungi or bacteria), a correct translation with a different sense. Other extracted answers include the transliteration \u79d1\u7f85\u5c3c\u6d77\u5cf6\u9152\u5e97(Island Colony), the name of a hotel, and the exact-match translation \u6b96\u6c11\u5730(foreign-controlled territory). For the given term bubble sort, the partial translation \u6392\u5e8f(sort) makes the top-1 translation (with a count of 20), while the top-2 to top-5 are either exact-match or acceptable translations. Note that this learning-based approach to mining translations and transliterations on the Web is an original contribution of our work. Previous works, such as Wu et al. (2005) and Lin et al. (2008) , simply used occurrence statistics to identify translations, which is roughly equivalent to our translational or transliterational features (see Section 4.3.2 and Section 4.3.3). While Lin et al. used prefixes of 3 letters to provide a makeshift model of transliteration, we model the name-transliteration relations directly using an EM algorithm. Moreover, we also take note of their pattern of appearance to allow more effective extraction of relevant translations with the distance feature (see Section 4.3.4). It is important to note that combining features inherent in the training data with features derived from external knowledge sources in a machine learning model allows us to cover more relevant translations, while filtering out many invalid candidates. ", "cite_spans": [ { "start": 1662, "end": 1678, "text": "Wu et al. (2005)", "ref_id": "BIBREF24" }, { "start": 1681, "end": 1698, "text": "Lin et al. 
(2008)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 3, "end": 11, "text": "Table 12", "ref_id": "TABREF2" }, { "start": 160, "end": 168, "text": "Table 11", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Error Analysis on Automatic Evaluation", "sec_num": "5.2.1" }, { "text": "We have presented a new method for mining translations on the Web for a given term. In our work, we use a set of terms and translations as seeds to obtain mixed-code snippets returned by a search engine, such as Google or Bing. We then automatically convert the snippets into a tagged sequence of tokens, automatically augment the data with features obtained from external knowledge sources, and automatically train a CRF model for sequence labeling. At runtime, we submit a query consisting of the given term to a search engine, tag the returned snippets using the trained model, and finally extract and rank the translation candidates for output. Preliminary experiments and evaluations show that our method cleanly combines various features, resulting in an integrated, learning-based system capable of finding both term translations and transliterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "Many avenues exist for future research and improvement of our system. For example, existing query expansion methods could be applied to retrieve more webpages containing translations of the given phrases (Zhang et al., 2005) . Translation features related to word parts (e.g., -lite in the term zeolite) could be used to improve identification of translations. Additionally, an interesting direction to explore is to identify phrase types and lengths (e.g., base NP and NP-prep-NP) and train type-specific CRF models for better results. 
In addition, natural language processing techniques such as word stemming, word lemmatization, or derivational morphological transformation could also be attempted to improve recall and precision.", "cite_spans": [ { "start": 208, "end": 228, "text": "(Zhang et al., 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "Another interesting direction to explore is using a robot to crawl webpages and filter mixed-code data to derive the translation features. With the crawled web pages, we can extract translations offline, without having to work with a search engine and its limited returned snippets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "Yet another direction of research would be to enhance the effectiveness of translation features by working on the level of Chinese words instead of characters. For that, we could either use an existing, general-purpose word segmenter or carry out self-organized word segmentation (Sproat & Shih, 1990) to produce word-based translation features.", "cite_spans": [ { "start": 280, "end": 301, "text": "(Sproat & Shih, 1990)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." 
}, { "text": "http://en.wikipedia.org/wiki/Wikipedia:Database_download 2 http://wiki.freebase.com/wiki/WEX 3 http://wordnet.princeton.edu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://BOW.sinica.edu.tw/ 5 http://www.aclclp.org.tw/doc/bw_agr_e.PDF 6 http://terms.nict.gov.tw/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2006T13", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.aclclp.org.tw/use_ced.php", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "with multiple candidates with the same highest frequency, one of them is randomly selected as the output.Procedure FindTranslation(P, SE):(1)Submit P as a query to SE to retrieve a set of mixed-code snippets s 1 , s 2 , s 3 , ..., s n for each snippet s i in snippets s 1 , s 2 , s 3 , ..., s n : for each Chinese character in s i : 2Generate the three features base on P(3)Run the CRF model on snippets with features for BIO labels for each snippet s i in snippets s 1 , s 2 , s 3 , ..., s n : 4Extract Chinese tagged with BI sequence as candidates (5)Output the candidate with highest redundancy (frequency). (In case of a tie, randomly select one of the most frequent.) Figure 6 . Pseudocode of the runtime phase.", "cite_spans": [], "ref_spans": [ { "start": 673, "end": 681, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "We extracted the titles of English and Chinese articles that are connected through language links in Wikipedia using the Wikipedia dump created on 2010/08/16 (Google, 2010). We used a short list of stop words based on the rules pointed out by Lin et al. (2008) to exclude titles that are for administrative or other purposes. 
We obtained a total of 155,310 article pairs, from which we randomly selected 13,150 and 2,181 titles as seeds to obtain the training and test data, respectively, as described in Section 4.3.1. We then used the English-Chinese Bilingual WordNet 9 and NICT terminology bank (terms.nict.gov.tw/download_main.php) to generate translational features, in an effort to cover both common nouns and technical terms. The bilingual WordNet, translated from the original Princeton WordNet 1.6 has 99,642 synset entries, each with multiple lemmas and multiple translations, forming a total of some 850,000 translation pairs. The NICT database has over 1.1 million term translation pairs in 72 categories and covers a wide variety of different fields. See Table 9 for the numbers of entries in each of the 72 categories.9 http://www.aclclp.org.tw/doc/bw_agr_e.PDF", "cite_spans": [ { "start": 243, "end": 260, "text": "Lin et al. (2008)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 1069, "end": 1076, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Cross-language information access to multilingual collections on the internet", "authors": [ { "first": "G.-W", "middle": [], "last": "Bian", "suffix": "" }, { "first": "H.-H", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2000, "venue": "Journal of the American Society for Information Science", "volume": "51", "issue": "3", "pages": "281--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bian, G.-W., & Chen, H.-H. (2000). Cross-language information access to multilingual collections on the internet. 
Journal of the American Society for Information Science, 51(3), 281-296.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Base noun phrase translation using web data and the em algorithm", "authors": [ { "first": "Y", "middle": [], "last": "Cao", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th international conference on computational linguistics", "volume": "1", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cao, Y., & Li, H. (2002). Base noun phrase translation using web data and the em algorithm. In Proceedings of the 19th international conference on computational linguistics, volume 1, 1-7.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to find translations and transliterations on the web", "authors": [ { "first": "J", "middle": [ "Z" ], "last": "Chang", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Jang", "suffix": "" }, { "first": ".-S", "middle": [], "last": "", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th annual meeting of the association for computational linguistics", "volume": "2", "issue": "", "pages": "130--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, J. Z., Chang, J. S., & Jang, R. J.-S. (2012). Learning to find translations and transliterations on the web. 
In Proceedings of the 50th annual meeting of the association for computational linguistics, volume 2, 130-134.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Translating unknown queries with web corpora for cross-language information retrieval", "authors": [ { "first": "P.-J", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "J.-W", "middle": [], "last": "Teng", "suffix": "" }, { "first": "R.-C", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J.-H", "middle": [], "last": "Wang", "suffix": "" }, { "first": "W.-H", "middle": [], "last": "Lu", "suffix": "" }, { "first": "L.-F", "middle": [], "last": "Chien", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 27th annual international acm sigir conference on research and development in information retrieval", "volume": "", "issue": "", "pages": "146--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheng, P.-J., Teng, J.-W., Chen, R.-C., Wang, J.-H., Lu, W.-H., & Chien, L.-F. (2004). Translating unknown queries with web corpora for cross-language information retrieval. In Proceedings of the 27th annual international acm sigir conference on research and development in information retrieval, 146-153.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Wordnet: An electronic lexical database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. (1998). Wordnet: An electronic lexical database. 
MIT Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Identifying word correspondence in parallel texts", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the workshop on speech and natural language", "volume": "", "issue": "", "pages": "152--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, W. A., & Church, K. W. (1991). Identifying word correspondence in parallel texts. In Proceedings of the workshop on speech and natural language, 152-157.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Internet encyclopaedias go head to head", "authors": [ { "first": "J", "middle": [], "last": "Giles", "suffix": "" } ], "year": 2005, "venue": "Freebase data dumps", "volume": "438", "issue": "", "pages": "900--901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438(7070), 900-901. Google. (2010). Freebase data dumps (August 16th, 2010 ed.).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Systems and methods for using anchor text as parallel corpora for cross-language information retrieval", "authors": [ { "first": "L", "middle": [], "last": "Gravano", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Henzinger", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gravano, L., & Henzinger, M. H. (2006). Systems and methods for using anchor text as parallel corpora for cross-language information retrieval (No. 
7146358).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sinica bow: integrating bilingual wordnet and sumo ontology", "authors": [ { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 2003 International Conference on Natural Language Processing and Knowledge Engineering", "volume": "", "issue": "", "pages": "825--826", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-R. (2003). Sinica bow: integrating bilingual wordnet and sumo ontology. In Proceedings of 2003 International Conference on Natural Language Processing and Knowledge Engineering, 825-826.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic extraction of named entity translingual equivalence based on multi-feature cost minimization", "authors": [ { "first": "F", "middle": [], "last": "Huang", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the acl 2003 workshop on multilingual and mixed-language named entity recognition", "volume": "15", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, F., Vogel, S., & Waibel, A. (2003). Automatic extraction of named entity translingual equivalence based on multi-feature cost minimization. In Proceedings of the acl 2003 workshop on multilingual and mixed-language named entity recognition, 15, 9-16.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Machine transliteration. Computational Linguistics", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "", "volume": "24", "issue": "", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knight, K., & Graehl, J. (1998). Machine transliteration. 
Computational Linguistics, 24(4), 599-612.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Feature-rich statistical translation of noun phrases", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st annual meeting on association for computational linguistics", "volume": "1", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P., & Knight, K. (2003). Feature-rich statistical translation of noun phrases. In Proceedings of the 41st annual meeting on association for computational linguistics, volume 1, 311-318.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An algorithm for finding noun phrase correspondences in bilingual corpora", "authors": [ { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "17--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kupiec, J. (1993). An algorithm for finding noun phrase correspondences in bilingual corpora. 
In Proceedings of the 31st annual meeting on association for computational linguistics, 17-22.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Chinet: a chinese name finder system for document triage", "authors": [ { "first": "K", "middle": [], "last": "Kwok", "suffix": "" }, { "first": "P", "middle": [], "last": "Deng", "suffix": "" }, { "first": "N", "middle": [], "last": "Dinstl", "suffix": "" }, { "first": "H", "middle": [], "last": "Sun", "suffix": "" }, { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "P", "middle": [], "last": "Peng", "suffix": "" }, { "first": "J", "middle": [], "last": "Doyon", "suffix": "" } ], "year": 2005, "venue": "Proceedings of 2005 international conference on intelligence analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kwok, K., Deng, P., Dinstl, N., Sun, H., Xu, W., Peng, P., & Doyon., J. (2005). Chinet: a chinese name finder system for document triage. In Proceedings of 2005 international conference on intelligence analysis.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the eighteenth international conference on machine learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lafferty, J. D., McCallum, A., & Pereira, F. C. N. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
In Proceedings of the eighteenth international conference on machine learning, 282-289.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Translating chinese romanized name into Chinese idiographic characters via corpus and web validation", "authors": [ { "first": "Y", "middle": [], "last": "Li", "suffix": "" }, { "first": "G", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2005, "venue": "Proceedings of coria 2005", "volume": "", "issue": "", "pages": "323--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Y., & Grefenstette, G. (2005). Translating chinese romanized name into Chinese idiographic characters via corpus and web validation. In Proceedings of coria 2005, 323-338.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Mining parenthetical translations from the web by word alignment", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" }, { "first": "S", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "B", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "M", "middle": [], "last": "Pa\u015fca", "suffix": "" } ], "year": 2008, "venue": "Proceedings of acl-08: Hlt, 994-1002. Learning to Find Translations and Transliterations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D., Zhao, S., Van Durme, B., & Pa\u015fca, M. (2008). Mining parenthetical translations from the web by word alignment. In Proceedings of acl-08: Hlt, 994-1002. Learning to Find Translations and Transliterations 45", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Anchor text mining for translation of web queries: A transitive translation approach", "authors": [ { "first": "W.-H", "middle": [], "last": "Lu", "suffix": "" }, { "first": "L.-F", "middle": [], "last": "Chien", "suffix": "" }, { "first": "H.-J", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "ACM Trans. Inf. 
Syst", "volume": "22", "issue": "2", "pages": "242--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu, W.-H., Chien, L.-F., & Lee, H.-J. (2004). Anchor text mining for translation of web queries: A transitive translation approach. ACM Trans. Inf. Syst., 22(2), 242-269.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Models of translational equivalence among words", "authors": [ { "first": "I", "middle": [ "D" ], "last": "Melamed", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "2", "pages": "221--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melamed, I. D. (2000). Models of translational equivalence among words. Computational Linguistics, 26(2), 221-249.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Using the web as a bilingual dictionary", "authors": [ { "first": "M", "middle": [], "last": "Nagata", "suffix": "" }, { "first": "T", "middle": [], "last": "Saito", "suffix": "" }, { "first": "K", "middle": [], "last": "Suzuki", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the workshop on data-driven methods in machine translation", "volume": "14", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nagata, M., Saito, T., & Suzuki, K. (2001). Using the web as a bilingual dictionary. 
In Proceedings of the workshop on data-driven methods in machine translation, volume 14, 1-8.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Finding ideographic representations of Japanese names written in latin script via language identification and corpus validation", "authors": [ { "first": "Y", "middle": [], "last": "Qu", "suffix": "" }, { "first": "G", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qu, Y., & Grefenstette, G. (2004). Finding ideographic representations of Japanese names written in latin script via language identification and corpus validation. In Proceedings of the 42nd annual meeting on association for computational linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Translation and technology", "authors": [ { "first": "C", "middle": [ "K" ], "last": "Quah", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quah, C. K. (2006). Translation and technology. Palgrave Macmillan.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Translating collocations for bilingual lexicons: a statistical approach", "authors": [ { "first": "F", "middle": [], "last": "Smadja", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "Mckeown", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smadja, F., McKeown, K. R., & Hatzivassiloglou, V. (1996). Translating collocations for bilingual lexicons: a statistical approach. 
Computational Linguistics, 22(1), 1-38.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A statistical method for finding word boundaries in Chinese text", "authors": [ { "first": "R", "middle": [ "W" ], "last": "Sproat", "suffix": "" }, { "first": "C", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1990, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "4", "issue": "4", "pages": "336--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R. W., & Shih, C. (1990). A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese and Oriental Languages, 4(4), 336-351.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Learning source-target surface patterns for web-based terminology translation", "authors": [ { "first": "J.-C", "middle": [], "last": "Wu", "suffix": "" }, { "first": "T", "middle": [], "last": "Lin", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the acl 2005 on interactive poster and demonstration sessions", "volume": "", "issue": "", "pages": "37--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, J.-C., Lin, T., & Chang, J. S. (2005). Learning source-target surface patterns for web-based terminology translation. 
In Proceedings of the acl 2005 on interactive poster and demonstration sessions, 37-40.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Mining translations of oov terms from the web through cross-lingual query expansion", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "F", "middle": [], "last": "Huang", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 28th annual international acm sigir conference on research and development in information retrieval", "volume": "", "issue": "", "pages": "669--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Y., Huang, F., & Vogel, S. (2005). Mining translations of oov terms from the web through cross-lingual query expansion. In Proceedings of the 28th annual international acm sigir conference on research and development in information retrieval, 669-670.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Generate translation features (Section 4.3.2) (3) Generate transliteration features (Section 4.3.3) (4) Generate distance features (Section 4.3.4) (5) Train a CRF model for classifying translations (Section 4.3.4) Outline of the training phase.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Simplified view of HMM and CRF.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Examples of tagged snippets for title pairs \"support vector machine\", \"\u652f\u6301\u5411\uf97e\u6a5f\" and \"luminous flux\", \"\u5149\u901a\uf97e\".", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": ", the translation \u652f\u6301\u5411\uf97e\u6a5f(zhichi xiangliang ji) is tagged with one B tag and four I tags, Learning to Find Translations and Transliterations 29 on the Web based on Conditional Random Fields", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "type_str": "table", "html": null, "text": "Learning to 
Find Translations and Transliterations 23 on the Web based on Conditional Random Fields information.", "num": null, "content": "" }, "TABREF2": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Chinese English
\u793e\u4ea4\u5de5\u7a0b social engineering
\u793e\u7fa4\u7db2\uf937 social network
\u793e\u7fa4\u5a92\u9ad4 social media
w Count(w) P(w) P( w ) e f Count(e,f) P(e,f)
\u793e 3 1.00 0.00 social \u793e 3 1.00
\u7fa4 2 0.67 0.33 social \u7fa4 2 0.67
\u4ea4 1 0.33 0.67 social \u4ea4 1 0.33
\u7db2 1 0.33 0.67 network \u793e 1 0.33
social 3 1.00 0.00 network \u7fa4 1 0.33
media 1 0.33 0.67 network \u4ea4 0 0.00
network 1 0.33 0.67 network \u7db2 1 0.33
In our case, the \u03c6\u00b2 scores are calculated by counting the occurrence of Chinese
" }, "TABREF3": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
vector vector machine
\u5411 793 9,960 \uf97e 768 21,907 \u6a5f 3,381 28,566
97 1,975,642 122 1,963,695 491 1,954,054
" }, "TABREF4": { "type_str": "table", "html": null, "text": "Example 2 \u03c6 scores.", "num": null, "content": "
support vector machine luminous flux
\u63d0 0.00000 0.00000 0.00000 \u767c 0.00432 0.00000
\u51fa 0.00000 0.00000 0.00000 \u5149 0.01028 6.0E-06
\u7684 0.00000 0.00000 0.00000 \u539f 0.00000 0.00000
\u652f 0.09075 0.00000 0.00000 \uf9e4 0.00000 0.00000
\u6301 0.00058 0.00000 0.00000 \uf967 1.4E-06 0.00000
\u5411 0.00000 0.06530 0.00000 \u5149 0.01028 6.0E-06
\uf97e 0.00000 0.02880 0.00000 \u901a 0.00000 0.06410
\u6a5f 0.00000 0.00000 0.09067 \uf97e 0.00000 0.00793
To generate features for each token, we calculate the following logarithmic value of \u03c6\u00b2:
feat_translation(f) = 9 + log max_{e \u2208 E} \u03c6\u00b2(e, f)
" }, "TABREF5": { "type_str": "table", "html": null, "text": "Learning to Find Translations and Transliterations31 on the Web based on Conditional Random Fieldsshow two snippets tagged with translation features in", "num": null, "content": "" }, "TABREF6": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Chinese Chinese English Possible
Transliteration Romanization Named Entity Segmentations
\u55ac\u5e03\u65af qiao-bu-si jobs j-o-bs, j-ob-s, jo-b-s
\u74ca\u55ac qiong-qiao jonjo j-onjo, jo-njo, jon-jo, jonj-o
\u55ac\u745f\u592b qiao-se-fu joseph j-o-seph, j-os-eph, j-ose-ph, j-osep-h,
jo-s-eph, jo-se-ph, jo-sep-h,
jos-e-ph, ...
\u55ac\u51e1\u5c3c qiao-fan-ni giovanni g-i-ovanni, g-io-vanni, g-iov-anni, ...,
" }, "TABREF7": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Rom. Chinese English Tr. Cnt(f,e) P(f|e)
qiao geo 140 0.38
jo 66 0.18
joe 41 0.11
bu b 1090 0.58
bu 301 0.16
br 122 0.07
si s 5626 0.69
es 292 0.04
st 226 0.03
" }, "TABREF10": { "type_str": "table", "html": null, "text": "Learning to Find Translations and Transliterations35 on the Web based on Conditional Random Fieldstranslational and transliterational features. Finally, we use the labeled data with three kinds features to train a CRF model.", "num": null, "content": "
Table 8. Example training data.
word TR TL distance label
\u7b2c 0 0 14 O
62 0 0 13 O
(62nd) \u5c46 0 0 12 O
" }, "TABREF11": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Category Count Category Count
Pharmacy 1,673 Material Science (Polymer) 3,422
Bacterial Immunology 2,063 Material Science (Ceramics) 2,292
Phylogenetic 1,756 Agricultural Machinery 3,060
Psychopathology 1,067 Science Education 5,289
Psychology 5,741 Industrial Engineering 5,400
Physics/Chemistry Equipments 17,279 Astronomy 6,091
Comparative Anatomy 6,013 Music 2,922
Education 2,198 Food Science and Technology 35,666
Sociology 2,825 Foreign Names 57,054
Human Anatomy 5,796 Mineralogy 28,032
Pathology 7,307 Lab Animal and Comparative Medicine 8,220
Sports 1,708 Dance 10,564
Soil Science 1,240 Statistic 7,370
Forestry 7,954 Meteorology 20,061
Fertilizer Science 1,155 Animal Husbandry 21,466
Hydraulic Engineering 4,601 Mining and Metallurgical Engineering 13,914
Electronic Engineering 7,627 Computer 101,389
Agricultural Promotion 669 Textile Science and Technology 22,761
Accounting 4,884 Meteorology 17,789
Civil Engineering 16,745 Endocrinology 2,577
Aeronautics and Astronautics 23,751 Chemical Engineering 22,386
Electrical Engineering 20,058 Communications Engineering 16,899
Engineering Graphics 4,766 Biology (Plants) 42,730
Mathematics 16,708 Mechanism and Machine Theory 2,085
Foundry 5,314 Shipbuilding Engineering 30,701
Mechanical Engineering 35,369 Physics 22,077
Earth Science 30,673 Zoology 29,586
Geology 22,780 Marine 37,329
Marketing 1,667 Chemistry (Compound) 19,258
Veterinary Medicine 24,990 Fish 29,730
Nuclear Energy 38,462 Economics 8,891
Production Automation 2,560 Marine Geology 31,015
Surveying 14,371 Power Engineering 69,546
Ecology 7,495 Chemistry (Others) 25,273
Mechanics 10,716 Administration 3,743
Materials Science (Metal) 7,665 Journalism and Communication 4,419
" }, "TABREF12": { "type_str": "table", "html": null, "text": "Ch : the results reported in the Lin et al. paper for their system targeting Chinese parenthetical translations.", "num": null, "content": "
Table 10. Automatic evaluation results.
system coverage exact match top5 exact match
Full (En-Ch) 80.4% 43.0% 56.4%
-TL 83.9% 27.5% 40.2%
-TR 81.2% 37.4% 50.3%
-TL-TR 83.2% 21.1% 32.8%
LIN En-Ch 59.6% 27.9% not reported
LIN Ch-En 70.8% 36.4% not reported
LDC (En-Ch) 10.8% 4.8% N/A
NICT (En-Ch) 24.2% 32.1% N/A
\u2022 LDC : the LDC2.0 English to Chinese bilingual dictionary with 161,117
translation pairs. (reported in Lin et al.)
for their system targeting English parenthetical translations
" }, "TABREF13": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
English Wiki Chinese Wiki Extracted
Pope Celestine IV \uf96c\u840a\u65af\u5ef7\u56db\u4e16 \ufa00\u840a\u65af\u5ef7\u56db\u4e16 A
Huaneng Power International \u83ef\u80fd\u570b\u969b \u83ef\u80fd\u570b\u969b\u96fb\uf98a A
Shangrao \u4e0a\u9952\u5e02 \u4e0a\u9952 A
Aurora University \u9707\u65e6\u5927\u5b78 \u5967\uf90f\uf925\u5927\u5b78 A
Fujian \u798f\u5efa\uf96d \u798f\u5efa A
Dream Theater \u5922\u5287\u5834 \u5922\u5287\u5834\u5408\u5531\u5718 A
Coturnix \u9d89\u5c6c \u9d6a\u9d89 A
Waste \u5783\u573e \u5ee2\u7269 A
Allyl alcohol \u70ef\u4e19\u9187 \u4e19\u70ef\u9187 A
Machine \u6a5f\u68b0 \u5de5\u5177\u6a5f A
Colony \u6b96\u6c11\u5730 \u83cc\uf918 B
Collateral \uf918\u65e5\uf970\u795e \u62b5\u62bc B
Ludwig Erhard \uf937\u5fb7\u7dad\u5e0c\uff0e\u827e\u54c8\u5fb7 \u827e\u54c8\u5fb7 P
John Woo \u5433\u5b87\u68ee \u7d04\u7ff0 P
Osman I \u5967\u65af\u66fc\u4e00\u4e16 \u5967\u65af\u66fc P
Itumeleng Khune \u4f0a\u5716\u6885\uf9d4\uff0e\u5eab\u5167 \u5eab\u5167 P
Naphthoquinone P
Base analog \u9e7c\u57fa\uf9d0\u4f3c\u7269 \u9e7c\u57fa\uf9d0 P
Chinese Paladin \u4ed9\u528d\u5947\u4fe0\u50b3 \u795e\u528d P
Bubble sort \u5192\u6ce1\u6392\u5e8f \u6392\u5e8f P
The Love Suicides at Sonezaki \u66fe\u6839\u5d0e\u60c5\u6b7b \u590f\u76ee\u6f31\u77f3 E
Survivor's Law II \uf9d8\u653f\u65b0\u4eba\u738bII \uf90a\u77f3\uf97c\u7de3 E
Phichit \u6279\u96c6\u5e9c \uf929\u5bb6\u5ead\u4e3b\u5a66 E
Ammonium \u92a8 \u904e\uf9ce\u9178\u92a8 E
" }, "TABREF14": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
given term freq candidate
money laundering 27 \u6d17\u9322 M
2 \u6d17\u9ed1\u9322 A
1 \u6d17\u9322\u5ba3\u50b3 E
Music and Lyrics 18 \u6b4c\u60c5\u4eba P
2 K\u6b4c\u60c5\u4eba M
flyback transformer 14 \u8b8a\u58d3\u5668 P
3 \u56de\u6383\u8b8a\u58d3\u5668 M
2 \u8fd4\u99b3\u5f0f\u8b8a\u58d3\u5668 A
2 \u8fd4\u99b3\u8b8a\u58d3\u5668 A
colony 15 \u83cc\uf918 B
2 \u79d1\uf90f\u5c3c\u6d77\u5cf6\u9152\u5e97 B
2 \u6b96\u6c11\u5730 M
Osman I 8 \u5967\u65af\u66fc P
5 \u5967\u65af\u66fc\u4e00\u4e16 M
bubble sort 20 \u6392\u5e8f P
19 \u6ce1\u6392\u5e8f A
17 \u6c23\u6ce1\u6392\u5e8f M
9 \u6ce1\u6cab\u6392\u5e8f A
4 \u6ce1\u6ce1\u6392\u5e8f A
" } } } }