{
"paper_id": "I05-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:25:54.527737Z"
},
"title": "The Use of Monolingual Context Vectors for Missing Translations in Cross-Language Information Retrieval",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Qu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Clairvoyance Corporation",
"location": {
"addrLine": "Suite 700",
"postCode": "5001, 15213",
"settlement": "Baum Boulevard, Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CEA",
"location": {
"addrLine": "B.P.6",
"postCode": "92265",
"settlement": "Fontenay-aux-Roses Cedex",
"country": "France"
}
},
"email": "gregory.grefenstette@cea.fr"
},
{
"first": "David",
"middle": [
"A"
],
"last": "Evans",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Clairvoyance Corporation",
"location": {
"addrLine": "Suite 700",
"postCode": "5001, 15213",
"settlement": "Baum Boulevard, Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "For cross-language text retrieval systems that rely on bilingual dictionaries for bridging the language gap between the source query language and the target document language, good bilingual dictionary coverage is imperative. For terms with missing translations, most systems employ some approaches for expanding the existing translation dictionaries. In this paper, instead of lexicon expansion, we explore whether using the context of the unknown terms can help mitigate the loss of meaning due to missing translation. Our approaches consist of two steps: (1) to identify terms that are closely associated with the unknown source language terms as context vectors and (2) to use the translations of the associated terms in the context vectors as the surrogate translations of the unknown terms. We describe a query-independent version and a query-dependent version using such monolingual context vectors. These methods are evaluated in Japanese-to-English retrieval using the NTCIR-3 topics and data sets. Empirical results show that both methods improved CLIR performance for short and medium-length queries and that the query-dependent context vectors performed better than the query-independent versions.",
"pdf_parse": {
"paper_id": "I05-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "For cross-language text retrieval systems that rely on bilingual dictionaries for bridging the language gap between the source query language and the target document language, good bilingual dictionary coverage is imperative. For terms with missing translations, most systems employ some approaches for expanding the existing translation dictionaries. In this paper, instead of lexicon expansion, we explore whether using the context of the unknown terms can help mitigate the loss of meaning due to missing translation. Our approaches consist of two steps: (1) to identify terms that are closely associated with the unknown source language terms as context vectors and (2) to use the translations of the associated terms in the context vectors as the surrogate translations of the unknown terms. We describe a query-independent version and a query-dependent version using such monolingual context vectors. These methods are evaluated in Japanese-to-English retrieval using the NTCIR-3 topics and data sets. Empirical results show that both methods improved CLIR performance for short and medium-length queries and that the query-dependent context vectors performed better than the query-independent versions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For cross-language text retrieval systems that rely on bilingual dictionaries for bridging the language gap between the source query language and the target document language, good bilingual dictionary coverage is imperative [8, 9] . Yet, translations for proper names and special terminology are often missing in available dictionaries. Various methods have been proposed for finding translations of names and terminology through transliteration [5, 11, 13, 14, 16, 18, 20] and corpus mining [6, 7, 12, 15, 22] . In this paper, instead of attempting to find the candidate translations of terms without translations to expand existing translation dictionaries, we explore to what extent simply using text context can help mitigate the missing translation problem and for what kinds of queries. The context-oriented approaches include (1) identifying words that are closely associated with the unknown source language terms as context vectors and (2) using the translations of the associated words in the context vectors as the surrogate translations of the unknown words. We describe a query-independent version and a query-dependent version using such context vectors. We evaluate these methods in Japanese-to-English retrieval using the NTCIR-3 topics and data sets. In particular, we explore the following questions:",
"cite_spans": [
{
"start": 225,
"end": 228,
"text": "[8,",
"ref_id": "BIBREF7"
},
{
"start": 229,
"end": 231,
"text": "9]",
"ref_id": "BIBREF8"
},
{
"start": 447,
"end": 450,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 451,
"end": 454,
"text": "11,",
"ref_id": "BIBREF10"
},
{
"start": 455,
"end": 458,
"text": "13,",
"ref_id": "BIBREF12"
},
{
"start": 459,
"end": 462,
"text": "14,",
"ref_id": null
},
{
"start": 463,
"end": 466,
"text": "16,",
"ref_id": "BIBREF15"
},
{
"start": 467,
"end": 470,
"text": "18,",
"ref_id": "BIBREF17"
},
{
"start": 471,
"end": 474,
"text": "20]",
"ref_id": "BIBREF19"
},
{
"start": 493,
"end": 496,
"text": "[6,",
"ref_id": "BIBREF5"
},
{
"start": 497,
"end": 499,
"text": "7,",
"ref_id": "BIBREF6"
},
{
"start": 500,
"end": 503,
"text": "12,",
"ref_id": "BIBREF11"
},
{
"start": 504,
"end": 507,
"text": "15,",
"ref_id": "BIBREF14"
},
{
"start": 508,
"end": 511,
"text": "22]",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Can translations obtained from context vectors help CLIR performance?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Are query-dependent context vectors more effective than query-independent context vectors for CLIR? In the balance of this paper, we first describe related work in Section 2. The methods of obtaining translations through context vectors are presented in Section 3. The CLIR evaluation system and evaluation results are presented in Section 4 and Section 5, respectively. We summarize the paper in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In dictionary-based CLIR applications, approaches for dealing with terms with missing translations can be classified into three major categories. The first is a do-nothing approach by simply ignoring the terms with missing translations. The second category includes attempts to generate candidate translations for a subset of unknown terms, such as names and technical terminology, through phonetic translation between different languages (i.e., transliteration) [5, 11, 13, 14, 16, 18, 20] . Such methods generally yield translation pairs with reasonably good accuracy reaching about 70% [18] . Empirical results have shown that the expanded lexicons can significantly improve CLIR system performance [5, 16, 20] . The third category includes approaches for expanding existing bilingual dictionaries by exploring multilingual or bilingual corpora. For example, the \"mix-lingual\" feature of the Web has been exploited for locating translation pairs by searching for the presence of both Chinese and English text in a text window [22] . In work focused on constructing bilingual dictionaries for machine translation, automatic translation lexicons are compiled using either clean aligned parallel corpora [12, 15] or non-parallel comparable corpora [6, 7] . In work with non-parallel corpora, contexts of source language terms and target language terms and a seed translation lexicon are combined to measure the association between the source language terms and potential translation candidates in the target language. The techniques with non-parallel corpora save the expense of constructing large-scale parallel corpora with the tradeoff of lower accuracy, e.g., about 30% accuracy for the top-one candidate [6, 7] . To our knowledge, the usefulness of such lexicons in CLIR systems has not been evaluated.",
"cite_spans": [
{
"start": 463,
"end": 466,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 467,
"end": 470,
"text": "11,",
"ref_id": "BIBREF10"
},
{
"start": 471,
"end": 474,
"text": "13,",
"ref_id": "BIBREF12"
},
{
"start": 475,
"end": 478,
"text": "14,",
"ref_id": null
},
{
"start": 479,
"end": 482,
"text": "16,",
"ref_id": "BIBREF15"
},
{
"start": 483,
"end": 486,
"text": "18,",
"ref_id": "BIBREF17"
},
{
"start": 487,
"end": 490,
"text": "20]",
"ref_id": "BIBREF19"
},
{
"start": 589,
"end": 593,
"text": "[18]",
"ref_id": "BIBREF17"
},
{
"start": 702,
"end": 705,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 706,
"end": 709,
"text": "16,",
"ref_id": "BIBREF15"
},
{
"start": 710,
"end": 713,
"text": "20]",
"ref_id": "BIBREF19"
},
{
"start": 1029,
"end": 1033,
"text": "[22]",
"ref_id": "BIBREF21"
},
{
"start": 1204,
"end": 1208,
"text": "[12,",
"ref_id": "BIBREF11"
},
{
"start": 1209,
"end": 1212,
"text": "15]",
"ref_id": "BIBREF14"
},
{
"start": 1248,
"end": 1251,
"text": "[6,",
"ref_id": "BIBREF5"
},
{
"start": 1252,
"end": 1254,
"text": "7]",
"ref_id": "BIBREF6"
},
{
"start": 1709,
"end": 1712,
"text": "[6,",
"ref_id": "BIBREF5"
},
{
"start": 1713,
"end": 1715,
"text": "7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While missing translations have been addressed in dictionary-based CLIR systems, most of the approaches mentioned above attempt to resolve the problem through dictionary expansion. In this paper, we explore non-lexical approaches and their effectiveness on mitigating the problem of missing translations. Without additional lexicon expansion, and keeping the unknown terms in the source language query, we extract context vectors for these unknown terms and obtain their translations as the surrogate translations for the original query terms. This is motivated by the pre-translation feedback techniques proposed by several previous studies [1, 2] . Pre-translation feedback has been shown to be effective for resolving translation ambiguity, but its effect on recovering the lost meaning due to missing translations has not been empirically evaluated. Our work provides the first empirical results for such an evaluation.",
"cite_spans": [
{
"start": 642,
"end": 645,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 646,
"end": 648,
"text": "2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For a source language term t, we define the context vector of term t as: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Independent Context Vectors",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t C = \u232a \u2329 i t t t t t ,..., , ,",
"eq_num": ", 4 3 2"
}
],
"section": "Query-Independent Context Vectors",
"sec_num": "3.1"
},
{
"text": "trans(t) = <trans(t 1 ), trans(t 2 ), \u2026, trans(t n )>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Independent Context Vectors",
"sec_num": "3.1"
},
{
"text": "Selection of the source language context terms for the unknown term above is only based on the association statistics in an independent source language corpus. It does not consider other terms in the query as context; thus, it is query independent. Using the Japanese-to-English pair as an example, the steps are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Independent Context Vectors",
"sec_num": "3.1"
},
{
"text": "1. For a Japanese term t that is unknown to the bilingual dictionary, extract concordances of term t within a window of P bytes (we used P=200 bytes or 100 Japanese characters) in a Japanese reference corpus. 2. Segment the extracted Japanese concordances into terms, removing stopwords. 3. Select the top N (e.g., N=5) most frequent terms from the concordances to form the context vector for the unknown term t. 4. Translate these selected concordance terms in the context vector into English to form the pseudo-translations of the unknown term t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Independent Context Vectors",
"sec_num": "3.1"
},
{
"text": "Note that, in the translation step (Step 4) of the above procedure, the source language association statistics for selecting the top context terms and frequencies of their translations are not used for ranking or filtering any translations. Rather, we rely on the Cross Language Information Retrieval system's disambiguation function to select the best translations in context of the target language documents [19] .",
"cite_spans": [
{
"start": 410,
"end": 414,
"text": "[19]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Independent Context Vectors",
"sec_num": "3.1"
},
{
"text": "When query context is considered for constructing context vectors and pseudotranslations, the concordances containing the unknown terms are re-ranked based on the similarity scores between the window concordances and the vector of the known terms in the query. Each window around the unknown term is treated as a document, and the known query terms are used. This is based on the assumption that the top ranked concordances are likely to be more similar to the query; subsequently, the context terms in the context vectors provide better context for the unknown term. Again, using the Japanese-English pair as an example, the steps are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "1. For a Japanese term t unknown to the bilingual dictionary, extract a window of text of P bytes (we used P=200 bytes or 100 Japanese characters) around every occurrence of term t in a Japanese reference corpus. 2. Segment the Japanese text in each window into terms and remove stopwords. 3. Re-rank the window based on similarity scores between the terms found in the window and the vector of the known query terms. 4. Obtain the top N (e.g., N=5) most frequently occurring terms from the top M (e.g., M=100) ranking windows to form the Japanese context vector for the unknown term t. 5. Translate each term in the Japanese context vector into English to form the pseudo-translations of the unknown term t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "The similarity scores are based on Dot Product. The main difference between the two versions of context vectors is whether the other known terms in the query are used for ranking the window concordances. Presumably, the other query terms provide a context-sensitive interpretation of the unknown terms. When M is extremely large, however, the query-dependent version should approach the performance of the query-independent version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "We illustrate both versions of the context vectors with topic 23 (\u91d1\u5927\u4e2d\u5927\u7d71\u9818\u306e\u5bfe\u30a2\u30b8\u30a2\u653f\u7b56 \"President Kim Dae-Jung's policy toward Asia\") from NTCIR-3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "First, the topic is segmented into terms, with the stop words removed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "\u91d1 \u5927 \u4e2d ; \u5927 \u7d71 \u9818 ; \u30a2 \u30b8 \u30a2 ; \u653f \u7b56",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "Then, the terms are categorized as \"known\" vs. \"unknown\" based on the bilingual dictionary: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "\u7d4c \u6e08 \u5371 \u6a5f \u514b \u670d \u3078 \uff18 \u9805 \u76ee - - \u97d3 \u56fd \u306e \u91d1 \u5927 \u4e2d \u30fb \u6b21 \u671f \u5927 \u7d71 \u9818 \u3001 \u96c7 \u7528 \u4fc3 \u9032 \u306a \u3069 \u63d0 \u793a \u3010 \u30bd \u30a6 \u30eb \uff13 \uff11 \u65e5 \u5927 \u6fa4 \u6587 \u8b77 \u3011 \u97d3 \u56fd \u306e \u91d1 \u5927 \u4e2d ( \u30ad \u30e0 \u30c7 \u30b8 \u30e5 \u30f3 ) \u6b21 \u671f \u5927 \u7d71 \u9818 \u306f \u304f \u3010 \u30bd \u30a6 \u30eb \uff13 \uff11 \u65e5 \u5927 \u6fa4 \u6587 \u8b77 \u3011 \u97d3 \u56fd \u306e \u91d1 \u5927 \u4e2d ( \u30ad \u30e0 \u30c7 \u30b8 \u30e5 \u30f3 ) \u6b21 \u671f \u5927 \u7d71 \u9818 \u306f \u7d4c \u4e16 \u6e08 \u6c11 \u300d \u306e \u66f8 \u3092 \u8a18 \u8005 \u56e3 \u306b \u898b \u305b \u308b \u91d1 \u5927 \u4e2d \u30fb \u6b21 \u671f \u5927 \u7d71 \u9818 \uff1d \uff21 \uff30 \u2026 \u2026",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "Next, the text in each window is segmented by a morphological processor into terms with stopwords removed [21] .",
"cite_spans": [
{
"start": 106,
"end": 110,
"text": "[21]",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": "In the query-independent version, we simply select the top 5 most frequently occurring terms in the concordance windows. 3 5 2 7 : \u91d1 3 3 9 9 : \u5927 \u4e2d 3 0 3 5 : \u5927 \u7d71 \u9818 2 6 5 8 : \u97d3 \u56fd 9 0 1 : \u30ad \u30e0 \u30c7 \u30b8 \u30e5 \u30f3 1 Then, the translations of the above context terms are obtained from the bilingual dictionary to provide pseudo-translations for the unknown term",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 237,
"text": "3 5 2 7 : \u91d1 3 3 9 9 : \u5927 \u4e2d 3 0 3 5 : \u5927 \u7d71 \u9818 2 6 5 8 : \u97d3 \u56fd 9 0 1 : \u30ad \u30e0 \u30c7 \u30b8 \u30e5 \u30f3 1",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Query-Dependent Context Vectors",
"sec_num": "3.2"
},
{
"text": ", with the relevant translations in italics: With the query-dependent version, the segmented concordances are ranked by comparing the similarity between the concordance vector and the known term vector. Then we take the 100 top ranking concordances and, from this smaller set, select the top 5 most frequently occurring terms. This time, the top 5 context terms are: could produce a correct transliteration of the name in English, which is not addressed in this paper. Our methods for name transliteration can be found in [18, 20] .",
"cite_spans": [
{
"start": 522,
"end": 526,
"text": "[18,",
"ref_id": "BIBREF17"
},
{
"start": 527,
"end": 530,
"text": "20]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u91d1 \u5927 \u4e2d",
"sec_num": null
},
{
"text": "\u91d1 \u5927 \u4e2d \u2245 \u91d1 \u21d2 g o l d \u91d1 \u5927 \u4e2d \u2245 \u91d1 \u21d2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u91d1 \u5927 \u4e2d",
"sec_num": null
},
{
"text": "We evaluate the usefulness of the above two methods for obtaining missing translations in our Japanese-to-English retrieval system. Each query term missing from our bilingual dictionary is provided with pseudo-translations using one of the methods. The CLIR system involves the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "First, a Japanese query is parsed into terms 2 with a statistical part of speech tagger and NLP module [21] . Stopwords are removed from query terms. Then query terms are split into a list of known terms, i.e., those that have translations from bilingual dictionaries, and a list of unknown terms, i.e., those that do not have translations from bilingual dictionaries. Without using context vectors for unknown terms, translations of the known terms are looked up in the bilingual dictionaries and our disambiguation module selects the best translation for each term based on coherence measures between translations [19] .",
"cite_spans": [
{
"start": 103,
"end": 107,
"text": "[21]",
"ref_id": "BIBREF20"
},
{
"start": 616,
"end": 620,
"text": "[19]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "The dictionaries we used for Japanese to English translation are based on edict 3 , which we expanded by adding translations of missing English terms from a core English lexicon by looking them up using BabelFish 4 . Our final dictionary has a total of 210,433 entries. The English corpus used for disambiguating translations is about 703 MB of English text from NTCIR-4 CLIR track 5 . For our source language corpus, we used the Japanese text from NTCIR-3.",
"cite_spans": [
{
"start": 80,
"end": 81,
"text": "3",
"ref_id": "BIBREF2"
},
{
"start": 382,
"end": 383,
"text": "5",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "When context vectors are used to provide translations for terms missing from our dictionary, first, the context vectors for the unknown terms are constructed as described above. Then the same bilingual lexicon is used for translating the context vectors to create a set of pseudo-translations for the unknown term t. We keep all the pseudotranslations as surrogate translations of the unknown terms, just as if they really were the translations we found for the unknown terms in our bilingual dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "We use a corpus-based translation disambiguation method for selecting the best English translations for a Japanese query word. We compute coherence scores of translated sequences created by obtaining all possible combinations of the translations in a source sequence of n query words (e.g., overlapping 3-term windows in our experiments). The coherence score is based on the mutual information score for each pair of translations in the sequence. Then we take the sum of the mutual information scores of all translation pairs as the score of the sequence. Translations with the highest coherence scores are selected as best translations. More details on translation disambiguation can be found in [19] .",
"cite_spans": [
{
"start": 697,
"end": 701,
"text": "[19]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "Once the best translations are selected, indexing and retrieval of documents in the target language is based on CLARIT [4] . For this work, we use the dot product function for computing similarities between a query and a document: 2 In these experiments, we do not include multiple-word expression such as \u6226 \u4e89 \u72af \u7f6a (war crime) as terms, because translation of most compositional multiple-word expressions can be generally constructed from translations of component words (\u6226\u4e89 and \u72af \u7f6a",
"cite_spans": [
{
"start": 119,
"end": 122,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": ") and our empirical evaluation has not shown significant advantages of a separate model of phrase translation. 3 (3)",
"cite_spans": [
{
"start": 111,
"end": 112,
"text": "3",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "where IDF and TF are standard inverse document frequency and term frequency statistics, respectively. IDF(t) is computed with the target corpus for retrieval. The coefficient C(t) is an \"importance coefficient\", which can be modified either manually by the user or automatically by the system (e.g., updated during feedback). For query expansion through (pseudo-) relevance feedback, we use pseudorelevance feedback based on high-scoring sub-documents to augment the queries. That is, after retrieving some sub-documents for a given topic from the target corpus, we take a set of top ranked sub-documents, regarding them as relevant sub-documents to the query, and extract terms from these sub-documents. We use a modified Rocchio formula for extracting and ranking terms for expansion:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "NumDoc DocSet D t D TF t IDF t Rocchio \u2211 \u2208 \u00d7 = ) ( ) ( ) ( (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "where IDF(t) is the Inverse Document Frequency of term t in reference database, NumDoc the number of sub-documents in the given set of sub-documents, and TF D (t) the term frequency score for term t in sub-document D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "Once terms for expansion are extracted and ranked, they are combined with the original terms in the query to form an expanded query. exp Q Q k new Q + \u00d7 = (5) in which Q new , Q orig , Q exp stand for the new expanded query, the original query, and terms extracted for expansion, respectively. In the experiments reported in Section 5, we assign a constant weight to all expansion terms (e.g., 0.5)",
"cite_spans": [
{
"start": 155,
"end": 158,
"text": "(5)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR System",
"sec_num": "4"
},
{
"text": "For evaluation, we used NTCIR-3 Japanese topics 6 . Of the 32 topics that have relevance judgments, our system identifies unknown terms as terms not present in our expanded Japanese-to-English dictionary described above. The evaluation of the effect of using context vectors is based only on the limited number of topics that contain these unknown terms. The target corpus is the NTCIR-3 English corpus, which contains 22,927 documents. The statistics about the unknown terms for short (i.e., the title field only), medium (i.e., the description field only), and long (i.e., the description and the narrative fields) queries are summarized below. The total number of unknown terms that we treated with context vectors was 83 (i.e., 6+15+62). For evaluation, we used the mean average precision and recall for the top 1000 documents and also precision@30, as defined in TREC retrieval evaluations.",
"cite_spans": [
{
"start": 48,
"end": 49,
"text": "6",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "5.1"
},
{
"text": "We compare three types of runs, both with and without post-translation pseudorelevance feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short",
"sec_num": null
},
{
"text": "\u2022 Runs without context vectors (baselines)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short",
"sec_num": null
},
{
"text": "\u2022 Runs with query-dependent context vectors \u2022 Runs with query-independent context vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short",
"sec_num": null
},
{
"text": "Tables 1-4 present the performance statistics for the above runs. For the runs with translation disambiguation (Tables 1-2) , using context vectors improved overall recall, average precision, and precision at 30 documents for short queries. Context vectors moderately improved recall, average precision (except for the query independent version), and precision at 30 documents for medium length queries.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 123,
"text": "(Tables 1-2)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Empirical Observations",
"sec_num": "5.2"
},
{
"text": "For the long queries, we do not observe any advantages of using either querydependent or query-independent versions of the context vectors. This is probably because the other known terms in long queries provide adequate context for recovering the loss of missing translation of the unknown terms. Adding candidate translations from context vectors only makes the query more ambiguous and inexact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Observations",
"sec_num": "5.2"
},
{
"text": "When all translations were kept (Tables 3-4) , i.e., when no translation disambiguation was performed, we only see overall improvement in recall for short and mediumlength queries. We do not see any advantage of using context vectors for improving average precision or precision at 30 documents. For longer queries, the performance statistics were overall worse than the baseline. As pointed out in [10] , when all translations are kept without proper weighting of the translations, some terms get more favorable treatment than other terms simply because they contain more translations. So, in models where all translations are kept, proper weighting schemes should be developed, e.g., as suggested in related research [17] . ",
"cite_spans": [
{
"start": 399,
"end": 403,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 719,
"end": 723,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 32,
"end": 44,
"text": "(Tables 3-4)",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Empirical Observations",
"sec_num": "5.2"
},
{
"text": "We have used context vectors to obtain surrogate translations for terms that appear in queries but that are absent from bilingual dictionaries. We have described two types of context vectors: a query-independent version and a query-dependent version. In the empirical evaluation, we have examined the interaction between the use of context vectors with other factors such as translation disambiguation, pseudo-relevance feedback, and query lengths. The empirical findings suggest that using query-dependent context vectors together with post-translation pseudo-relevance feedback and translation disambiguation can help to overcome the meaning loss due to missing translations for short queries. For longer queries, the longer context in the query seems to make the use of context vectors unnecessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary and Future Work",
"sec_num": "6"
},
{
"text": "The paper presents only our first set on experiments of using context to recover meaning loss due to missing translations. In our future work, we will verify the observations with other topic sets and database sources; verify the observations with other language pairs, e.g., Chinese-to-English retrieval; and experiment with different parameter settings such as context window size, methods for context term selection, different ways of ranking context terms, and the use of the context term ranking in combination with disambiguation for translation selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary and Future Work",
"sec_num": "6"
},
{
"text": "http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings3/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Topics 4, 23, 26, 27, 33. 8 Topics 4, 5, 7, 13, 14, 20, 23, 26, 27, 28, 29, 31, 33, 38. 9 Topics 2, 4, 5, 7, 9, 13, 14, 18, 19, 20, 21, 23, 24, 26, 27, 28, 29, 31, 33, 37, 38, 42, 43, 50. 10 The average number of unique unknown terms is 1.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dictionary Methods for Cross-Language Information Retrieval",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of Database and Expert Systems Applications",
"volume": "",
"issue": "",
"pages": "791--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ballesteros, L., and Croft, B.: Dictionary Methods for Cross-Language Information Re- trieval. In Proceedings of Database and Expert Systems Applications (1996) 791-801.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Resolving Ambiguity for Cross-Language Retrieval",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "64--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ballesteros, L., Croft, W. B.: Resolving Ambiguity for Cross-Language Retrieval. In Proceedings of SIGIR (1998) 64-71.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Context Vector Model for Information Retrieval",
"authors": [
{
"first": "H",
"middle": [],
"last": "Billhardt",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Borrajo",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Maojo",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "53",
"issue": "3",
"pages": "236--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billhardt, H., Borrajo, D., Maojo, V.: A Context Vector Model for Information Retrieval. Journal of the American Society for Information Science and Technology, 53(3) (2002) 236-249.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "CLARIT-TREC Experiments. Information Processing and Management",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
},
{
"first": "R",
"middle": [
"G"
],
"last": "Lefferts",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "31",
"issue": "",
"pages": "385--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, D. A., Lefferts, R. G.: CLARIT-TREC Experiments. Information Processing and Management, 31(3) (1995) 385-395.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Japanese/English Cross-Language Information Retrieval: Exploration of Query Translation and Transliteration",
"authors": [
{
"first": "A",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2001,
"venue": "Computer and the Humanities",
"volume": "35",
"issue": "4",
"pages": "389--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fujii, A., Ishikawa, T.: Japanese/English Cross-Language Information Retrieval: Explora- tion of Query Translation and Transliteration. Computer and the Humanities, 35(4) (2001) 389-420.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Statistical View on Bilingual Lexicon Extraction: From Parallel Corpora to Non-parallel Corpora",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fung, P.: A Statistical View on Bilingual Lexicon Extraction: From Parallel Corpora to Non-parallel Corpora. In Proceedings of AMTA (1998) 1-17.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An IR Approach for Translating New Words from Nonparallel, Comparable Texts",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "L",
"middle": [
"Y"
],
"last": "Yee",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "414--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fung, P., Yee, L. Y.: An IR Approach for Translating New Words from Nonparallel, Comparable Texts. In Proceedings of COLING-ACL (1998) 414-420.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Experiments in Multilingual Information Retrieval",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Hull",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hull, D. A., Grefenstette, G.: Experiments in Multilingual Information Retrieval. In Pro- ceedings of the 19th Annual International ACM SIGIR Conference on Research and De- velopment in Information Retrieval (1996) 49-57.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Evaluating the Adequacy of a Multilingual Transfer Dictionary for Cross Language Information Retrieval",
"authors": [
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "755--758",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, G.: Evaluating the Adequacy of a Multilingual Transfer Dictionary for Cross Language Information Retrieval. In Proceedings of LREC (1998) 755-758.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Problem of Cross Language Information Retrieval",
"authors": [
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1998,
"venue": "Cross Language Information Retrieval",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, G.: The Problem of Cross Language Information Retrieval. In G. Grefen- stette, ed., Cross Language Information Retrieval, Kluwer Academic Publishers (1998) 1-9.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mining the Web to Create a Language Model for Mapping between English Names and Phrases and Japanese",
"authors": [
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence",
"volume": "",
"issue": "",
"pages": "110--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, G., Qu, Y., Evans, D. A.: Mining the Web to Create a Language Model for Mapping between English Names and Phrases and Japanese. In Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence (2004) 110-116.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Robust Bilingual Word Alignment for Machine Aided Translation",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido, D., Church, K., Gale, W. A.: Robust Bilingual Word Alignment for Machine Aided Translation. In Proceedings of the Workshop on Very Large Corpora: Academic and In- dustrial Perspectives (1993) 1-8.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic Identification and Backtransliteration of Foreign Words for Information Retrieval",
"authors": [
{
"first": "K",
"middle": [
"S"
],
"last": "Jeong",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Myaeng",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [
"S"
],
"last": "Choi",
"suffix": ""
}
],
"year": 1999,
"venue": "Information Processing and Management",
"volume": "35",
"issue": "4",
"pages": "523--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeong, K. S., Myaeng, S, Lee, J. S., Choi, K. S.: Automatic Identification and Back- transliteration of Foreign Words for Information Retrieval. Information Processing and Management, 35(4) (1999) 523-540.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Building an MT dictionary from Parallel Texts Based on Linguistic and Statistical Information",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kumano",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hirakawa",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "76--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumano, A., Hirakawa, H.: Building an MT dictionary from Parallel Texts Based on Lin- guistic and Statistical Information. In Proceedings of the 15 th International Conference on Computational Linguistics (COLING) (1994) 76-81.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generating Phonetic Cognates to Handle Named Entities in English-Chinese Cross-Language Spoken Document Retrieval",
"authors": [
{
"first": "H",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Automatic Speech Recognition and Understanding Workshop (ASRU 2001)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng, H., Lo, W., Chen, B., Tang, K.: Generating Phonetic Cognates to Handel Named Entities in English-Chinese Cross-Language Spoken Document Retrieval. In Proc of the Automatic Speech Recognition and Understanding Workshop (ASRU 2001) (2001).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Applying Query Structuring in Cross-Language Retrieval",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pirkola",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Puolamaki",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Jarvelin",
"suffix": ""
}
],
"year": 2003,
"venue": "Information Processing and Management",
"volume": "39",
"issue": "3",
"pages": "391--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pirkola, A., Puolamaki, D., Jarvelin, K.: Applying Query Structuring in Cross-Language Retrieval. Information Management and Processing: An International Journal. Vol 39 (3) (2003) 391-402.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Finding Ideographic Representations of Japanese Names in Latin Scripts via Language Identification and Corpus Validation",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qu, Y., Grefenstette, G.: Finding Ideographic Representations of Japanese Names in Latin Scripts via Language Identification and Corpus Validation. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (2004) 183-190.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Resolving Translation Ambiguity Using Monolingual Corpora",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
}
],
"year": 2002,
"venue": "Advances in Cross-Language Information Retrieval: Third Workshop of the Cross-Language Evaluation Forum",
"volume": "2785",
"issue": "",
"pages": "223--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qu, Y., Grefenstette, G., Evans, D. A.: Resolving Translation Ambiguity Using Monolin- gual Corpora. In Peters, C., Braschler, M., Gonzalo, J. (eds): Advances in Cross-Language Information Retrieval: Third Workshop of the Cross-Language Evaluation Forum, CLEF 2002, Rome, Italy, September 19-20, 2002. Lecture Notes in Computer Science, Vol 2785. Springer (2003) 223-241.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic Transliteration for Japanese-to-English Text Retrieval",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Evans",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "353--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qu, Y., Grefenstette, G., Evans, D. A: Automatic Transliteration for Japanese-to-English Text Retrieval. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2003) 353-360.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards Effective Strategies for Monolingual and Bilingual Information Retrieval: Lessons Learned from NTCIR-4",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Hull",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishikawa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nara",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ueda",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Noda",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Arita",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Funakoshi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Matsuda",
"suffix": ""
}
],
"year": null,
"venue": "ACM Transactions on Asian Language Information Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qu, Y., Hull, D. A., Grefenstette, G., Evans, D. A., Ishikawa, M., Nara, S., Ueda, T., Noda, D., Arita, K., Funakoshi, Y., Matsuda, H.: Towards Effective Strategies for Mono- lingual and Bilingual Information Retrieval: Lessons Learned from NTCIR-4. ACM Transactions on Asian Language Information Processing. (to appear)",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Using the web for automated translation extraction in cross-language information retrieval",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vines",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "162--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Y., Vines, P: Using the web for automated translation extraction in cross-language information retrieval. In Proceedings of the 27th Annual International ACM SIGIR Con- ference on Research and Development in Information Retrieval (2004) 162-169.",
"links": null
}
},
"ref_entries": {
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td>1391: \u5927\u7d71\u9818 1382: \u91d1 1335: \u5927\u4e2d 1045: \u97d3\u56fd 379: \u30ad\u30e0\u30c7\u30b8\u30e5\u30f3</td></tr><tr><td>\u91d1\u5927\u4e2d \u2245 \u5927\u7d71\u9818 \u21d2 chief, president, executive, presidential, daley</td></tr><tr><td>\u91d1\u5927\u4e2d \u2245 \u91d1 \u21d2 gold, metal, money</td></tr><tr><td>\u91d1\u5927\u4e2d \u2245 \u5927 \u21d2 \u2205</td></tr><tr><td>\u91d1\u5927\u4e2d \u2245 \u4e2d \u21d2 \u2205</td></tr><tr><td>\u91d1\u5927\u4e2d \u2245 \u97d3\u56fd \u21d2 korea</td></tr><tr><td>\u91d1\u5927\u4e2d \u2245 \u30ad\u30e0\u30c7\u30b8\u30e5\u30f3 \u21d2 \u2205 1</td></tr><tr><td>1 Romanization of the katakana name \u30ad\u30e0\u30c7\u30b8\u30e5\u30f3</td></tr></table>",
"text": "In this example, the context vectors from both versions are the same, even though the terms are ranked in different orders. The pseudo-translations from the context vectors are:",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>sim(P, D) = \u2211_{t \u2208 P \u2229 D} W_P(t) \u00b7 W_D(t). (1)</td></tr><tr><td>W_D(t) = TF_D(t) \u00b7 IDF(t). (2)</td></tr><tr><td>W_P(t) = C(t) \u00b7 TF_P(t) \u00b7 IDF(t).</td></tr></table>",
"text": "http://www.csse.monash.edu.au/~jwb/j_edict.html 4 http://world.altavista.com/ 5 http://research.nii.ac.jp/ntcir/ntcir-ws4/clir/index.html where W_P(t) is the weight associated with the query term t and W_D(t) is the weight associated with the term t in the document D. The two weights are computed as follows:",
"html": null,
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td>No Feedback</td><td>Recall</td><td>Avg. Precision</td><td>Prec@30</td></tr><tr><td>Short</td><td/><td/><td/></tr><tr><td>Baseline</td><td>28/112</td><td>0.1181</td><td>0.05</td></tr><tr><td>With context vectors</td><td>43/112</td><td>0.1295</td><td>0.0667</td></tr><tr><td>(query independent)</td><td>(+53.6%)</td><td>(+9.7%)</td><td>(+33.4%)</td></tr><tr><td>With context vectors</td><td>43/112</td><td>0.1573</td><td>0.0667</td></tr><tr><td>(query dependent)</td><td>(+53.6%)</td><td>(+33.2%)</td><td>(+33.4%)</td></tr><tr><td>Medium</td><td/><td/><td/></tr><tr><td>Baseline</td><td>113/248</td><td>0.1753</td><td>0.1231</td></tr><tr><td>With context vectors</td><td>114/248</td><td>0.1588</td><td>0.1256</td></tr><tr><td>(query independent)</td><td>(+0.9%)</td><td>(-9.5%)</td><td>(+2.0%)</td></tr><tr><td>With context vectors</td><td>115/248</td><td>0.1838</td><td>0.1282</td></tr><tr><td>(query dependent)</td><td>(+1.8%)</td><td>(+4.8%)</td><td>(+4.1%)</td></tr><tr><td>Long</td><td/><td/><td/></tr><tr><td>Baseline</td><td>305/598</td><td>0.1901</td><td>0.1264</td></tr><tr><td>With context vectors</td><td>308/598</td><td>0.1964</td><td>0.1125</td></tr><tr><td>(query independent)</td><td>(+1.0%)</td><td>(+3.3%)</td><td>(-11.0%)</td></tr><tr><td>With context vectors</td><td>298/598</td><td>0.1883</td><td>0.1139</td></tr><tr><td>(query dependent)</td><td>(-2.3%)</td><td>(-0.9%)</td><td>(-9.9%)</td></tr></table>",
"text": "Performance statistics for short, medium, and long queries. Translations were disambiguated; no feedback was used. Percentages show change over the baseline runs.",
"html": null,
"num": null
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td>With Feedback</td><td>Recall</td><td>Avg. Precision</td><td>Prec@30</td></tr><tr><td>Short</td><td/><td/><td/></tr><tr><td>Baseline</td><td>15/112</td><td>0.1863</td><td>0.0417</td></tr><tr><td>With context vectors</td><td>40/112</td><td>0.1812</td><td>0.0417</td></tr><tr><td>(query independent)</td><td>(+166.7%)</td><td>(-2.7%)</td><td>(+0.0%)</td></tr><tr><td>With context vectors</td><td>40/112</td><td>0.1942</td><td>0.0417</td></tr><tr><td>(query dependent)</td><td>(+166.7%)</td><td>(+4.2%)</td><td>(+0.0%)</td></tr><tr><td>Medium</td><td/><td/><td/></tr><tr><td>Baseline</td><td>139/248</td><td>0.286</td><td>0.1513</td></tr><tr><td>With context vectors</td><td>137/248</td><td>0.2942</td><td>0.1538</td></tr><tr><td>(query independent)</td><td>(-1.4%)</td><td>(+2.9%)</td><td>(+1.7%)</td></tr><tr><td>With context vectors</td><td>141/248</td><td>0.3173</td><td>0.159</td></tr><tr><td>(query dependent)</td><td>(+1.4%)</td><td>(+10.9%)</td><td>(+5.1%)</td></tr><tr><td>Long</td><td/><td/><td/></tr><tr><td>Baseline</td><td>341/598</td><td>0.2575</td><td>0.1681</td></tr><tr><td>With context vectors</td><td>347/598</td><td>0.2598</td><td>0.1681</td></tr><tr><td>(query independent)</td><td>(+1.8%)</td><td>(+0.9%)</td><td>(+0.0%)</td></tr><tr><td>With context vectors</td><td>340/598</td><td>0.2567</td><td>0.1639</td></tr><tr><td>(query dependent)</td><td>(-0.3%)</td><td>(-0.3%)</td><td>(-2.5%)</td></tr></table>",
"text": "Performance statistics for short, medium, and long queries. Translations were disambiguated; for pseudo-relevance feedback, the top 30 terms from top 20 subdocuments were selected based on the Rocchio formula. Percentages show change over the baseline runs.",
"html": null,
"num": null
},
"TABREF9": {
"type_str": "table",
"content": "<table><tr><td>No Feedback</td><td>Recall</td><td>Avg. Precision</td><td>Prec@30</td></tr><tr><td/><td>Short</td><td/><td/></tr><tr><td>Baseline</td><td>33/112</td><td>0.1032</td><td>0.0417</td></tr><tr><td>With context vectors</td><td>57/112</td><td>0.0465</td><td>0.05</td></tr><tr><td>(query independent)</td><td>(+72.7%)</td><td>(-54.9%)</td><td>(+19.9%)</td></tr><tr><td>With context vectors</td><td>41/112</td><td>0.1045</td><td>0.0417</td></tr><tr><td>(query dependent)</td><td>(+24.2%)</td><td>(-0.2%)</td><td>(+0%)</td></tr><tr><td/><td>Medium</td><td/><td/></tr><tr><td>Baseline</td><td>113/248</td><td>0.1838</td><td>0.0846</td></tr><tr><td>With context vectors</td><td>136/248</td><td>0.1616</td><td>0.0769</td></tr><tr><td>(query independent)</td><td>(+20.4%)</td><td>(-12.1%)</td><td>(-9.1%)</td></tr><tr><td>With context vectors</td><td>122/248</td><td>0.2013</td><td>0.0769</td></tr><tr><td>(query dependent)</td><td>(+8.0%)</td><td>(+9.5%)</td><td>(-9.1%)</td></tr><tr><td/><td>Long</td><td/><td/></tr><tr><td>Baseline</td><td>283/598</td><td>0.1779</td><td>0.0944</td></tr><tr><td>With context vectors</td><td>295/598</td><td>0.163</td><td>0.0917</td></tr><tr><td>(query independent)</td><td>(+4.2%)</td><td>(-8.4%)</td><td>(-2.9%)</td></tr><tr><td>With context vectors</td><td>278/598</td><td>0.1566</td><td>0.0931</td></tr><tr><td>(query dependent)</td><td>(-1.8%)</td><td>(-12.0%)</td><td>(-1.4%)</td></tr></table>",
"text": "Performance statistics for short, medium, and long queries. All translations were kept for retrieval; pseudo-relevance feedback was not used. Percentages show change over the baseline runs.",
"html": null,
"num": null
},
"TABREF10": {
"type_str": "table",
"content": "<table><tr><td>With Feedback</td><td>Recall</td><td>Avg. Precision</td><td>Prec@30</td></tr><tr><td/><td>Short</td><td/><td/></tr><tr><td>Baseline</td><td>40/112</td><td>0.1733</td><td>0.0417</td></tr><tr><td>With context vectors</td><td>69/112</td><td>0.1662</td><td>0.1583</td></tr><tr><td>(query independent)</td><td>(+72.5%)</td><td>(-4.1%)</td><td>(+279.6%)</td></tr><tr><td>With context vectors</td><td>44/112</td><td>0.1726</td><td>0.0417</td></tr><tr><td>(query dependent)</td><td>(+10.0%)</td><td>(-0.4%)</td><td>(+0.0%)</td></tr><tr><td/><td>Medium</td><td/><td/></tr><tr><td>Baseline</td><td>135/248</td><td>0.2344</td><td>0.1256</td></tr><tr><td>With context vectors</td><td>161/248</td><td>0.2332</td><td>0.1333</td></tr><tr><td>(query independent)</td><td>(+19.3%)</td><td>(-0.5%)</td><td>(+6.1%)</td></tr><tr><td>With context vectors</td><td>139/248</td><td>0.2637</td><td>0.1154</td></tr><tr><td>(query dependent)</td><td>(+3.0%)</td><td>(+12.5%)</td><td>(-8.1%)</td></tr><tr><td/><td>Long</td><td/><td/></tr><tr><td>Baseline</td><td>344/598</td><td>0.2469</td><td>0.1444</td></tr><tr><td>With context vectors</td><td>348/598</td><td>0.2336</td><td>0.1333</td></tr><tr><td>(query independent)</td><td>(+1.2%)</td><td>(-5.4%)</td><td>(-7.7%)</td></tr><tr><td>With context vectors</td><td>319/598</td><td>0.2033</td><td>0.1167</td></tr><tr><td>(query dependent)</td><td>(-7.3%)</td><td>(-17.7%)</td><td>(-19.2%)</td></tr></table>",
"text": "Performance statistics for short, medium, and long queries. All translations were kept for retrieval; for pseudo-relevance feedback, the top 30 terms from the top 20 subdocuments were selected based on the Rocchio formula. Percentages show change over the baseline runs.",
"html": null,
"num": null
}
}
}
}