{ "paper_id": "O12-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:02:47.221546Z" }, "title": "An Improvement in Cross-Language Document Retrieval Based on Statistical Models", "authors": [ { "first": "Long-Yue", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Macau", "location": {} }, "email": "" }, { "first": "Derek", "middle": [ "F" ], "last": "Wong", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Macau", "location": {} }, "email": "derekfw@umac.mo" }, { "first": "Lidia", "middle": [ "S" ], "last": "Chao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Macau", "location": {} }, "email": "lidiasc@umac.mo" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a proposed method integrated with three statistical models including Translation model, Query generation model and Document retrieval model for cross-language document retrieval. Given a certain document in the source language, it will be translated into the target language of statistical machine translation model. The query generation model then selects the most relevant words in the translated version of the document as a query. Finally, all the documents in the target language are scored by the document searching model, which mainly computes the similarities between query and document. This method can efficiently solve the problem of translation ambiguity and query expansion for disambiguation, which are critical in Cross-Language Information Retrieval. In addition, the proposed model has been extensively evaluated to the retrieval of documents that: 1) texts are long which, as a result, may cause the model to over generate the queries; and 2) texts are of similar contents under the same topic which is hard to be distinguished by the retrieval model. After comparing different strategies, the experimental results show a significant performance of the method with the average precision close to 100%. It is of a great significance to both cross-language searching on the Internet and the parallel corpus producing for statistical machine translation systems.", "pdf_parse": { "paper_id": "O12-1015", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a proposed method integrated with three statistical models including Translation model, Query generation model and Document retrieval model for cross-language document retrieval. Given a certain document in the source language, it will be translated into the target language of statistical machine translation model. The query generation model then selects the most relevant words in the translated version of the document as a query. Finally, all the documents in the target language are scored by the document searching model, which mainly computes the similarities between query and document. This method can efficiently solve the problem of translation ambiguity and query expansion for disambiguation, which are critical in Cross-Language Information Retrieval. In addition, the proposed model has been extensively evaluated to the retrieval of documents that: 1) texts are long which, as a result, may cause the model to over generate the queries; and 2) texts are of similar contents under the same topic which is hard to be distinguished by the retrieval model. 
After comparing different strategies, the experimental results show a strong performance of the method, with average precision close to 100%. This is of great significance both for cross-language searching on the Internet and for producing parallel corpora for statistical machine translation systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the flourishing development of the Internet, the amount of information from a variety of domains is rising dramatically. Although researchers have done much to develop high-performance and effective monolingual Information Retrieval (IR), the diversity of information sources and the explosive growth of information in different languages have driven a great need for IR systems that can cross language boundaries [1] .", "cite_spans": [ { "start": 419, "end": 422, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Cross-Language Information Retrieval (CLIR) has become increasingly important for accessing information resources written in various languages. Besides, it is of great significance for aligning documents in multiple languages for Statistical Machine Translation (SMT) systems, whose quality depends heavily on the amount of parallel sentences used in constructing the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we focus on the problems of translation ambiguity, query generation and the searching score, which are key to retrieval performance. First of all, in order to increase the probability that the best translation, i.e. the one that occurs in the target documents, is selected from among multiple candidates, the context and the most likely probability of the whole sentence should be considered. We therefore apply a document translation approach using an SMT model instead of query translation, although the latter may require fewer computational resources. After the source documents are translated into the target language, the problem is transformed from a bilingual environment to a monolingual one, where conventional IR techniques can be used for document retrieval. Secondly, some terms in a given document are selected as the query, and these should distinguish the document from the others. However, some words occur too frequently to be useful and cannot distinguish target documents. These mostly fall into two types: one is words whose frequency is high both in the current document and in the whole document set, which are usually classified as stop words; the other is words whose frequency is moderate across several documents (but not in the whole document set). Words of this second type give low discrimination power to the document and are known as low discrimination words. Thus, the query generation model should filter out words of these types and pick the words that occur more frequently in a given document but less frequently in the whole document set. Finally, the document retrieval model scores each document according to the similarity between the generated query and the document. This model should give a higher score to the target document that covers the most relevant words in the given query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." },
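As a minimal illustration of the word-filtering intuition above, the following sketch (ours, not from the paper; the toy documents and the classification thresholds are invented for illustration) classifies words by their document frequency in a small collection:

```python
from collections import Counter

# Toy collection: words appearing in (nearly) every document act as stop words;
# words spread moderately across documents have low discrimination power;
# words concentrated in a single document make good query terms.
docs = {
    "d1": "the parliament debated the fisheries agreement".split(),
    "d2": "the council discussed the fisheries budget".split(),
    "d3": "the committee approved the enlargement report".split(),
}

# Document frequency: in how many documents does each word occur?
doc_freq = Counter(w for words in docs.values() for w in set(words))

for word, df in sorted(doc_freq.items()):
    if df == len(docs):
        kind = "stop word (high frequency everywhere)"
    elif df > 1:
        kind = "low discrimination word (moderate spread)"
    else:
        kind = "candidate query term (document specific)"
    print(f"{word:12s} df={df}  {kind}")
```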
{ "text": "There are two cases to be considered in investigating the method. In one case, both the source and target documents are long texts, from which it is hard to extract an exact query out of the large amount of information. In the other case, the contents of the documents are very similar, which makes them difficult to distinguish during retrieval. The results of the experiments reveal that the proposed model performs very well in both cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The paper is organized as follows. Related work is reviewed and discussed in Section 2. The proposed CLIR approach based on statistical models is described in Section 3. The resources and configurations of the experiments for evaluating the system are detailed in Section 4. Results, discussion and a comparison between different strategies are given in Section 5, followed by a conclusion and future improvements to end the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "CLIR is the circumstance in which a user tries to search a set of documents written in one language for a query in another language [2] . The issues of CLIR have been discussed from different perspectives for several decades. In this section, we briefly describe some related methods.", "cite_spans": [ { "start": 133, "end": 136, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "As for matching strategies for CLIR, query translation is the most widely used method due to its tractability. However, it is relatively difficult to resolve the problem of term ambiguity because \"queries are often short and short queries provide little context for disambiguation\" [3] . Hence, some researchers have used the document translation method as the opposite strategy to improve translation quality, since more varied context within each document is available for translation [4, 5] .", "cite_spans": [ { "start": 278, "end": 281, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 481, "end": 484, "text": "[4,", "ref_id": "BIBREF3" }, { "start": 485, "end": 487, "text": "5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." },
{ "text": "However, another problem introduced by this approach is word (term) disambiguation, because a word may have multiple possible translations [3] . Significant efforts have been devoted to this problem. Davis and Ogden [6] applied a part-of-speech (POS) method which requires POS tagging software for both languages. Marcello et al. presented a novel statistical method to score and rank the target documents by integrating probabilities computed by a query-translation model and a query-document model [7] . However, that approach does not aim at describing how users actually create queries, which has a key effect on retrieval performance. Owing to the availability of parallel corpora in multiple languages, some authors have tried to extract information beneficial for CLIR by using SMT techniques. S\u00e1nchez-Mart\u00ednez et al. [8] applied SMT technology to generate and translate queries in order to retrieve long documents.", "cite_spans": [ { "start": 145, "end": 148, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 222, "end": 225, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 502, "end": 505, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 827, "end": 830, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Researchers such as Marcello and S\u00e1nchez-Mart\u00ednez et al. have attempted to estimate translation probabilities from a parallel corpus according to a well-known algorithm developed by IBM [9] . The algorithm can automatically generate a bilingual term list, with a set of probabilities that a term is translated into its equivalents in another language, from the sentence alignments included in a parallel corpus. IBM Model 1 is the simplest of the five models and is often used for CLIR. The fundamental idea of Model 1 is to estimate each translation probability so that the probability represented by Eq. (1) is maximized:", "cite_spans": [ { "start": 186, "end": 189, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "P(t|s) = \\frac{\\epsilon}{(l+1)^m} \\prod_{j=1}^{m} \\sum_{i=0}^{l} P(t_j|s_i) \\qquad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "where t is a sequence of terms t_1, \u2026, t_m in the target language, s is a sequence of terms s_1, \u2026, s_l in the source language, P(t_j|s_i) is the translation probability, and \u03b5 is a parameter (\u03b5 = P(m|e)). Eq. (1) tries to balance the probability of translation and the query selection, but a problem still exists: it tends to select terms consisting of more words as the query because of their lower frequency, while cutting the length of terms may affect the quality of translation. Besides, IBM Model 1 proposes translations only word-by-word and ignores the context words in the query. This observation suggests that a disambiguation process can be added to select the correct translation words [3] . In our method, however, this conflict is resolved through context.", "cite_spans": [ { "start": 775, "end": 778, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." },
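To make Eq. (1) concrete, the following minimal sketch (our illustration; the toy lexicon, the NULL word and the value of ε are assumptions, not the paper's data) scores a candidate translation under IBM Model 1:

```python
def ibm1_score(t_words, s_words, lex, eps=1.0):
    """P(t|s) under IBM Model 1, Eq. (1): eps/(l+1)^m * prod_j sum_i P(t_j|s_i)."""
    s_with_null = ["NULL"] + s_words              # position i = 0 is the empty word
    l, m = len(s_words), len(t_words)
    prob = eps / (l + 1) ** m
    for t_j in t_words:                           # product over target terms
        prob *= sum(lex.get((t_j, s_i), 0.0)      # sum over source terms (incl. NULL)
                    for s_i in s_with_null)
    return prob

# Toy translation table P(target_word | source_word) for Spanish -> English.
lex = {("house", "casa"): 0.8, ("home", "casa"): 0.2,
       ("the", "la"): 0.9, ("the", "NULL"): 0.1}

print(ibm1_score(["the", "house"], ["la", "casa"], lex))  # approx. 0.0889
```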
{ "text": "The approach relies on three models: a translation model, which generates the most probable translation of the source documents; a query generation model, which determines which words in a document are more favorable to use in a query; and a document retrieval model, which evaluates the similarity between a given query and each document in the target document set. The workflow of the approach for CLIR is shown in Fig. 1 .", "cite_spans": [], "ref_spans": [ { "start": 409, "end": 415, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Proposed Model", "sec_num": "3." }, { "text": "Currently, the well-performing statistical machine translation systems are based on phrase-based models, which translate small word sequences at a time. Generally speaking, it is common for the translation model to translate contiguous sequences of words as a whole. Phrasal translation is certainly significant for CLIR [10] , as stated in Section 1, and it does a good job of dealing with term disambiguation.", "cite_spans": [ { "start": 313, "end": 317, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1." }, { "text": "In this work, documents are translated using the translation model provided by Moses, where the log-linear model is considered for training the phrase-based system models [11] , and is represented as:", "cite_spans": [ { "start": 171, "end": 175, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1." }, { "text": "p(e_1^I | f_1^J) = \\frac{\\exp(\\sum_{m=1}^{M} \\lambda_m h_m(e_1^I, f_1^J))}{\\sum_{e'^{I'}_1} \\exp(\\sum_{m=1}^{M} \\lambda_m h_m(e'^{I'}_1, f_1^J))} \\qquad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1." }, { "text": "where the h_m indicate a set of different models (feature functions), the \u03bb_m are the corresponding scaling factors, and the denominator can be ignored during the maximization process. The most important models in Eq. (2) are normally the phrase-based models, which are applied in both the source-to-target and target-to-source directions. The source document maximizes this equation to generate a translation that includes the words most likely to occur in the target document set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1." }, { "text": "After translating the source document into the target language of the translation model, the system should select a certain number of words as a query for searching, instead of using the whole translated text. This is for two reasons: one is the computational cost, and the other is that unimportant words degrade the similarity score. This is also the reason why search engines on the Internet often return nothing when a whole text is chosen as the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2." }, { "text": "In this paper, we apply a classical algorithm that is commonly used by search engines as a central tool for scoring and ranking the relevance of a document given a user query. Term Frequency-Inverse Document Frequency (TF-IDF) calculates a value for each word in a document through an inverse proportion of the frequency of the word in a particular document to the percentage of documents in which the word appears [12] . Given a document collection D, a word w, and an individual document d \u03f5 D, we calculate P(w, d) as:", "cite_spans": [ { "start": 415, "end": 419, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w,d) = f(w,d) \\\\times \\\\log \\\\frac{|D|}{f(w,D)}", "eq_num": "(3)" } ], "section": "Query Generation Model", "sec_num": "3.2." }, { "text": "where f(w, d) denotes the number of times w appears in d, |D| is the size of the corpus, and f(w, D) indicates the number of documents in D in which w appears [13] .", "cite_spans": [ { "start": 163, "end": 167, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2." }, { "text": "In implementation, if w is an Out-of-Vocabulary (OOV) term, the denominator f(w, D) becomes zero, which is problematic (division by zero). Thus, our model sets log(|D|/f(w,D)) = 1 (IDF = 1) when this situation occurs. Additionally, a list of stop-words in the target language is also used in query generation to remove words that are of high frequency but low discrimination power. Numbers are treated as useful terms in our model, as they also play an important role in distinguishing documents. Finally, after evaluating and ranking all the words in a document by their scores, we take a portion of the (n-best) words for constructing the query, guided by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2." }, { "text": "Size_q = \\lambda_{percent} \\times Len_d \\qquad (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2." }, { "text": "Size_q is the number of query terms. \u03bb_percent is a manually defined percentage, which determines Size_q according to Len_d, the length of the document. The model uses the first Size_q top-ranked words as the query. In other words, the larger the document, the more words are selected as the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2." },
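The query generation model of Eqs. (3) and (4) can be sketched in a few lines. The code below is our own illustration (the function name and the default percentage are assumptions): TF-IDF scoring with the IDF = 1 fallback for OOV terms, stop-word removal, retention of alphanumeric tokens such as numbers, and selection of the top \u03bb_percent of words:

```python
import math
from collections import Counter

def generate_query(doc_words, collection, stopwords, percent=0.10):
    """Rank the words of a translated document by TF-IDF (Eq. (3)) and keep
    the top fraction of them as the query, with the size set by Eq. (4)."""
    n_docs = len(collection)
    # f(w, D): number of documents in the collection containing w.
    df = Counter(w for words in collection for w in set(words))

    # Keep alphanumeric tokens (numbers included), drop stop words.
    tf = Counter(w for w in doc_words if w.isalnum() and w not in stopwords)

    scores = {}
    for w, f_wd in tf.items():
        # OOV guard from Section 3.2: if f(w, D) = 0, fall back to IDF = 1.
        idf = math.log(n_docs / df[w]) if df[w] else 1.0
        scores[w] = f_wd * idf                      # Eq. (3)

    size_q = max(1, int(percent * len(doc_words)))  # Eq. (4)
    return sorted(scores, key=scores.get, reverse=True)[:size_q]
```

Note that a word appearing in every document gets IDF = log(1) = 0 and thus never enters the query, which matches the stop-word intuition of Section 1.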
{ "text": "In order to use the generated query for retrieving documents, the core algorithm of the document retrieval model is derived from the Vector Space Model (VSM). Our system uses this model to calculate the similarity of each indexed document to the input query. The final scoring formula is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval Model", "sec_num": "3.3." }, { "text": "Score(q,d) = coord(q,d) \\times \\sum_{t \\in q} tf(t,d) \\times idf(t) \\times bst \\times norm(t,d) \\qquad (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval Model", "sec_num": "3.3." }, { "text": "where tf(t,d) is the term frequency factor for term t in document d, idf(t) is the inverse document frequency of term t, and coord(q,d) is a score factor based on how many of the query terms are found in document d. bst is a weight for each term in the query, and norm(t,d) encapsulates a few (indexing-time) boost and length factors, for instance weights for each document and field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval Model", "sec_num": "3.3." }, { "text": "In summary, this model takes into account many of the factors that can affect the overall score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval Model", "sec_num": "3.3." },
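As an illustration of Eq. (5), the sketch below scores a document with a simplified Lucene-style formula. It is our own minimal rendering under stated assumptions (a uniform per-term boost bst, a simple square-root length norm, a dampened term frequency), not the actual Lucene implementation:

```python
import math

def score(query, doc_words, df, n_docs, bst=1.0):
    """Simplified rendering of Eq. (5): coord(q,d) * sum_t tf*idf*bst*norm."""
    matched = [t for t in set(query) if t in doc_words]
    coord = len(matched) / len(set(query))       # share of query terms present
    norm = 1.0 / math.sqrt(len(doc_words))       # simple length normalization
    total = 0.0
    for t in matched:
        tf = math.sqrt(doc_words.count(t))       # dampened term frequency
        idf = 1.0 + math.log(n_docs / (1 + df.get(t, 0)))
        total += tf * idf * bst * norm
    return coord * total
```

The coord factor rewards documents covering more of the query terms, which is exactly the behavior asked of the retrieval model at the end of Section 1.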
}, { "text": "Among the existing CLIR approaches, the work of S\u00e1nchez-Mart\u00ednez et al. [8] based on SMT techniques and IBM Model 1 is very closed to our approach proposed in this paper. We take it as the benchmark and compare our model against this standard. In order to be able to compare with their results, we used the same datasets (training and testing data) for this evaluation. The chapters from April 1998 to October 2006 were used as a training set for model construction, both for training the Language Model (LM) and Translation Model (TM). While the chapters from April 1996 to March 1998 were considered as the testing set for evaluating the performance of the model. We split the test set into two parts: (1) TestSet1, where each chapter (split by label) is treated as a document, for tackling the large amount of information in long texts.", "cite_spans": [ { "start": 72, "end": 75, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1." }, { "text": "(2) TestSet2, where each paragraph (split by label) is treated as a document, for dealing with the low discrimination power. The analytical data of the corpus are presented in Table 1 . There are 1,022 documents in TestSet1, which is the number chapter that the data contains. The average document length of this dataset is 5,612 words. In TestSet2, after processing, the data contain 23,342 documents ( level) which are the splitting 1,022 chapters ( level) from TestSet1. 22 out of 100 documents are in the same topic ( level). Table 1 summarizes the number of documents, sentences, words and the average word number of each document. ", "cite_spans": [], "ref_spans": [ { "start": 189, "end": 196, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 579, "end": 586, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1." }, { "text": "In order to evaluate our proposed model, the following tools have been used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.2." }, { "text": "The probabilistic LMs are constructed on monolingual corpora by using the SRILM [15] . We use GIZA++ [16] to train the word alignment models for different pairs of languages of the Europarl corpus, and the phrase pairs that are consistent with the word alignment are extracted. For constructing the phrase-based statistical machine translation model, we use the open source Moses [17] toolkit, and the translation model is trained based on the log-linear model, as given in Eq. (2) . The workflow of constructing the translation model is illustrated in Fig. 2 and it consists of the following main steps 2 :", "cite_spans": [ { "start": 80, "end": 84, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 101, "end": 105, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 380, "end": 384, "text": "[17]", "ref_id": "BIBREF16" }, { "start": 478, "end": 481, "text": "(2)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 553, "end": 559, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.2." }, { "text": "(1) Preparation of aligned parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.2." 
}, { "text": "(2) Preprocessing of training data: tokenization, case conversion, and sentences filtering where sentences with length greater than fifty words are removed from the corpus in order to comply with the requirement of Moses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.2." }, { "text": "(3) A 5-gram LM is trained on Spanish data with the SRILM toolkits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.2." }, { "text": "(4) The phrased-based STM model is therefore trained on the prepared parallel corpus (English-Spanish) based on log-linear model of by using the nine-steps suggested in Moses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.2." }, { "text": "Once LM and TM have been obtained, we evaluate the proposed method with the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. Main workflow of training phase", "sec_num": null }, { "text": "(1) The source documents are first translated into target language using the constructed translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. Main workflow of training phase", "sec_num": null }, { "text": "(2) The words candidates are computed and ranked based on a TF -IDF algorithm and the n-best words candidates then are selected to form the query based on Eq. (3) and (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. Main workflow of training phase", "sec_num": null }, { "text": "(3) All the target documents are stored and indexed using Apache Lucene 3 as our default search engine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. Main workflow of training phase", "sec_num": null }, { "text": "(4) In retrieval, target documents are scored and ranked by using the document retrieval model to return the list of most related documents with Eq. (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. Main workflow of training phase", "sec_num": null }, { "text": "A number of experiments have been performed to investigate our proposed method on different settings. In order to evaluate the performance of the three independent models, we also conducted experiments to test them respectively before whole the CLIR experiment. The performance of the method is evaluated in terms of the average precision, that is, how often the target document is included within the first N-best candidate documents when retrieved. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "In this experiment, we want to evaluate the performance of the proposed system to retrieve documents (monolingual environment) given the query. It supposes that the translations of source documents are available, and the step to obtain the translation for the input document can therefore be neglected. Under such assumptions, the CLIR problem can be treated as normal IR in monolingual environment. In conducting the experiment, we used all of the source documents of TestSet1. The steps are similar to that of the testing phase as described in Section 4.2, excluding the translation step. 
{ "text": "In this experiment, we want to evaluate the performance of the proposed system in retrieving documents in a monolingual environment given the query. It is assumed that the translations of the source documents are available, so the step of obtaining the translation for the input document can be neglected. Under this assumption, the CLIR problem can be treated as normal IR in a monolingual environment. In conducting the experiment, we used all of the source documents of TestSet1. The steps are similar to those of the testing phase described in Section 4.2, excluding the translation step. The empirical results based on different configurations are presented in Table 2 , where the first column gives the number of candidate documents returned (N-best) and the remaining columns correspond to the percentage of words/terms used as the query.", "cite_spans": [], "ref_spans": [ { "start": 664, "end": 671, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Monolingual Environment Information Retrieval", "sec_num": "5.1." }, { "text": "The results show that the proposed method gives very high retrieval accuracy, with a precision of 100% when the top 18% of the words are used as the query. When the top 5 candidate documents are considered, the approach always achieves 100% retrieval accuracy with query sizes between 8% and 18%. This fully illustrates the effectiveness of the retrieval model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Environment Information Retrieval", "sec_num": "5.1." }, { "text": "The overall retrieval performance of the system is affected by the quality of translation. In order to gauge the performance of the translation model we built, we employ the commonly used evaluation metric BLEU. BLEU (Bilingual Evaluation Understudy) is a classical automatic evaluation method for the translation quality of an MT system [18] . In this evaluation, the translation model is created using the parallel corpus, as described in Section 4. We use another 5,000 sentences from TestSet1 for evaluation 4 .", "cite_spans": [ { "start": 373, "end": 377, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2." }, { "text": "The BLEU score we obtained is 32.08. This result is higher than that reported by Koehn [14] , whose BLEU score was 30.1 for the same language pair of the Europarl corpus. Although we did not use exactly the same data for constructing the translation model, the value of 30.1 was presented as a baseline for English-Spanish translation quality on the Europarl corpus.", "cite_spans": [ { "start": 115, "end": 119, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2." }, { "text": "The BLEU score shows that our translation model performs very well, owing to the large amount of training data we used and the pre-processing tasks we designed for cleaning the data. In other words, the translation quality of our model is good.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2." },
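For reference, corpus-level BLEU can be computed with standard toolkits; the paper itself relies on the standard Moses evaluation scripts (footnote 4), so the NLTK-based sketch below is only an illustrative assumption, with invented toy sentences:

```python
# Corpus-level BLEU with NLTK; the tokenized sentences are toy data.
from nltk.translate.bleu_score import corpus_bleu

references = [[["the", "house", "is", "small"]],   # one list of references per hypothesis
              [["the", "parliament", "adjourned"]]]
hypotheses = [["the", "house", "is", "small"],
              ["parliament", "was", "adjourned"]]

bleu = corpus_bleu(references, hypotheses)         # value in [0, 1]
print(f"BLEU = {100 * bleu:.2f}")                  # scaled to 0-100 as reported in the paper
```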
{ "text": "In this section, the proposed CLIR model is compared against the approach proposed by S\u00e1nchez-Mart\u00ednez et al. Table 3 presents the retrieval results given by their model. As illustrated, the best precision of their model reaches 97%, counting the cases in which the desired document is returned as the most relevant one among the candidates. In their method, both the probability of the translations and the relevance of the terms are taken into account in the retrieval model. The model is created based on IBM Model 1, Eq. (1); however, it still has the problem we stated in Section 2. In order to obtain higher retrieval precision, our model improves on it in several respects. First, we use only individual words, as well as numbers, as the query instead of phrases, which alleviates the sparsity problem of tending to select long phrases that occur rarely in the training data. Secondly, our method deals better with the problem of term disambiguation, because the phrase-based SMT system considers a wider sentence context in producing the translation. Last but not least, we do not use a fixed number of query words; instead, a portion of the most relevant words is selected for each input document, Eq. (4). In other words, the longer the document, the more words are used for retrieval of the target documents. Size_q is thus treated as a hidden variable in our document retrieval model.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation of CLIR Model", "sec_num": "5.3." }, { "text": "What still needs to be explained is that the metrics in Tables 3 and 4 are different. One experiment selected a static number of words for the query, so all the queries have the same size, while the other considers a percentage of the document length as the corresponding query size. Although it is hard to compare their performance column by column, the improvements can be seen clearly when the desired document is among the first N (N = 1, 2, 5, 10, 20) documents retrieved. Reviewing the experimental results presented in Tables 3 and 4 , our model gives an improvement of 2% in precision and achieves a 99% success rate in the case where the desired candidate is ranked in first place. Moreover, the success rates achieved by our proposed model at all levels in all tests are above 90%.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 541, "end": 555, "text": "Tables 3 and 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation of CLIR Model", "sec_num": "5.3." }, { "text": "As expected, the more words we use to generate the query, the more documents are returned, and the higher the rate at which the target document is retrieved within the candidate list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of CLIR Model", "sec_num": "5.3." }, { "text": "However, the documents in TestSet1 are too large to align sentences at the document level in further work, because a large document includes more sentences, which not only require more computational cost but also lead to a higher error rate during sentence alignment. One way to solve this problem is to further split the large documents and retrieve at a smaller document size. The problem in this case is that word overlap between a query and a wrong document is more probable when the document and the query are expressed in the same language. Furthermore, similar documents may include the same translations of the words in the query, and because the document retrieval model does not weight each word in the query, more words are needed to distinguish such documents. This is the reason why different query sizes are used in Tables 4 and 5, in order to guarantee comparable retrieval performance on different types of documents. As stated in Section 4.1, TestSet2 addresses this second concern. The results obtained are presented in Table 5 . On average, the success rate is above 90% (in precision) when a larger query size is used.
It can even achieve 99.5% when the 5 best candidates are considered in the retrieval results. This result indicates that reliable estimation of the probabilities is more important than the plausibility of the probabilistic models. This fully illustrates the discrimination power of the proposed method.", "cite_spans": [], "ref_spans": [ { "start": 830, "end": 837, "text": "Table 4", "ref_id": null }, { "start": 1033, "end": 1040, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation of CLIR Model", "sec_num": "5.3." }, { "text": "This article presents a TQD statistical approach (translation, query generation and document retrieval) for CLIR, which has been explored for the retrieval of both long documents and similar documents. Different from the traditional parallel corpora-based models that rely on the IBM algorithm, we divide our CLIR model into three independent parts which work together to deal with term disambiguation, query generation and document retrieval. The performance shows that this method does a good job of CLIR not only for long documents but also for similar documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Speed efficiency may be another big issue in our approach, as some researchers have stated 2 . However, with the increase of computing power in hardware and software, there will be little difference in speed efficiency between query translation-based and document translation-based CLIR. Besides, our system only translates a certain number of source documents to be retrieved, instead of all the indexed target documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Available online at http://www.statmt.org/europarl/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See http://www.statmt.org/wmt09/baseline.html for a detailed description of Moses training options.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at http://lucene.apache.org.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See http://www.statmt.org/wmt09/baseline.html for a detailed description of Moses evaluation options.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Statistical methods for cross-language information retrieval", "authors": [ { "first": "L", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "23--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Ballesteros and W. B.
Croft, \"Statistical methods for cross-language information retrieval,\" Cross-language information retrieval, pp. 23-40, 1998.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Technical issues of cross-language information retrieval: a review", "authors": [ { "first": "K", "middle": [], "last": "Kishida", "suffix": "" } ], "year": 2005, "venue": "", "volume": "41", "issue": "", "pages": "433--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Kishida, \"Technical issues of cross-language information retrieval: a review,\" Information Processing & Management, pp. 433-455, 41, 3 2005.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Cross-language information retrieval", "authors": [ { "first": "D", "middle": [ "W" ], "last": "Oard", "suffix": "" }, { "first": "A", "middle": [ "R" ], "last": "Diekema", "suffix": "" } ], "year": 1998, "venue": "", "volume": "33", "issue": "", "pages": "223--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. W. Oard and A. R. Diekema, \"Cross-language information retrieval,\" Annual review of Information science, 33, pp. 223-256, 1998.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Experiments with the eurospider retrieval system for clef", "authors": [ { "first": "M", "middle": [], "last": "Braschler", "suffix": "" }, { "first": "P", "middle": [], "last": "Schauble", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "140--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Braschler and P. Schauble, \"Experiments with the eurospider retrieval system for clef 2000,\" Cross-Language Information Retrieval and Evaluation, pp. 140-148, 2001.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ad hoc, cross-language and spoken document information retrieval at IBM", "authors": [ { "first": "M", "middle": [], "last": "Franz", "suffix": "" } ], "year": 1999, "venue": "NIST Special Publication: The 8th Text Retrieval Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Franz et al, \"Ad hoc, cross-language and spoken document information retrieval at IBM,\" NIST Special Publication: The 8th Text Retrieval Conference, TREC-8, 1999.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Quilt: Implementing a large-scale cross-language text retrieval system", "authors": [ { "first": "M", "middle": [ "W" ], "last": "Davis", "suffix": "" }, { "first": "W", "middle": [ "C" ], "last": "Ogden", "suffix": "" } ], "year": 1997, "venue": "ACM SIGIR Forum", "volume": "31", "issue": "", "pages": "92--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. W. Davis and W. C. Ogden, \"Quilt: Implementing a large-scale cross-language text retrieval system,\" ACM SIGIR Forum, pp. 92-98, 31, SI 1997.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical cross-language information retrieval using n-best query translations", "authors": [ { "first": "M", "middle": [], "last": "Federico", "suffix": "" }, { "first": "N", "middle": [], "last": "Bertoldi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "167--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Federico and N. 
Bertoldi, \"Statistical cross-language information retrieval using n-best query translations,\" Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 167-174, 2002.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Document translation retrieval based on statistical machine translation techniques", "authors": [ { "first": "F", "middle": [], "last": "Sanchez-Martinez", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Carrasco", "suffix": "" } ], "year": 2011, "venue": "Applied Artificial Intelligence", "volume": "25", "issue": "", "pages": "329--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Sanchez-Martinez and R. C. Carrasco, \"Document translation retrieval based on statistical machine translation techniques,\" Applied Artificial Intelligence, pp. 329-340, 25, 5 2011.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" } ], "year": 1993, "venue": "", "volume": "19", "issue": "", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown et al, \"The mathematics of statistical machine translation: Parameter estimation,\" Computational linguistics, pp. 263-311, 19, 2 1993.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Phrasal translation and query expansion techniques for cross-language information retrieval", "authors": [ { "first": "L", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1997, "venue": "ACM SIGIR Forum", "volume": "31", "issue": "", "pages": "84--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Ballesteros and W. B. Croft, \"Phrasal translation and query expansion techniques for cross-language information retrieval,\" ACM SIGIR Forum, pp. 84-91. 31, SI 1997.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Discriminative Training and Maximum Entropy Models for Statistical Machine Translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Twenty-Fourth Conference on Computational Linguistics and Speech Processing", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, H. Ney, \"Discriminative Training and Maximum Entropy Models for Statistical Machine Translation,\" In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 295-302, Philadelphia, PA, July Proceedings of the Twenty-Fourth Conference on Computational Linguistics and Speech Processing (ROCLING 2012) (2002)", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using tf-idf to determine word relevance in document queries", "authors": [ { "first": "J", "middle": [], "last": "Ramos", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the First Instructional Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. 
Ramos, \"Using tf-idf to determine word relevance in document queries,\" Proceedings of the First Instructional Conference on Machine Learning, 2003.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bridging the lexical chasm: statistical approaches to answer-finding", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Berger et al, \"Bridging the lexical chasm: statistical approaches to answer-finding,\" Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pp. 192-199, 2000.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, \"Europarl: A parallel corpus for statistical machine translation,\" MT summit, 5, 2005.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SRILM-an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Seventh International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke, \"SRILM-an extensible language modeling toolkit,\" Seventh International Conference on Spoken Language Processing, 2002.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "", "volume": "29", "issue": "", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney, \"A systematic comparison of various statistical alignment models,\" Computational linguistics. pp. 19-51, 29, 1 2003.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn et al, \"Moses: Open source toolkit for statistical machine translation,\" Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pp. 177-180, 2007.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni et al, \"BLEU: a method for automatic evaluation of machine translation,\" Proceedings of the 40th annual meeting on association for computational linguistics, pp. 
311-318, 2002.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Approach for CLIR" }, "TABREF0": { "content": "
Dataset | Documents | Size of corpus: Sentences | Size of corpus: Words | Ave. words in document
Training Set | 2,900 | 1,902,050 | 23,411,545 | 50
TestSet1 | 1,022 | 80,000 | 5,735,464 | 5,612
TestSet2 | 23,342 | 80,000 | 7,217,827 | 309
", "html": null, "type_str": "table", "text": "Analytical Data of Corpus", "num": null }, "TABREF1": { "content": "
Retrieved Documents (N-Best) | Query Size (Size_q in %): 2 | 4 | 8 | 10 | 14 | 18 | 20
1 | 0.794 | 0.910 | 0.993 | 0.989 | 0.986 | 1.000 | 0.989
5 | 0.921 | 0.964 | 1.000 | 1.000 | 1.000 | 1.000 | 0.996
10 | 0.942 | 0.971 | 1.000 | 1.000 | 1.000 | 1.000 | 0.996
20 | 0.946 | 0.978 | 1.000 | 1.000 | 1.000 | 1.000 | 0.996
", "html": null, "type_str": "table", "text": "The average precision in Monolingual Environment", "num": null }, "TABREF2": { "content": "
Retrieved Documents (N-Best) | Query Size (Num. of words in query): 1 | 2 | 5 | 10
1 | 0.32 | 0.51 | 0.84 | 0.97
2 | 0.43 | 0.63 | 0.90 | 0.98
5 | 0.51 | 0.73 | 0.95 | 0.99
10 | 0.55 | 0.77 | 0.97 | 1.00
20 | 0.56 | 0.80 | 0.98 | 1.00
Table 4. The retrieval results on TestSet1
Retrieved Documents (N-Best) | Query Size (Size_q in %): 1.0 | 1.4 | 1.8 | 2.0 | 3.0 | 6.0 | 10.0
1 | 0.90 | 0.93 | 0.95 | 0.97 | 0.99 | 1.00 | 0.99
5 | 0.98 | 0.98 | 0.99 | 0.99 | 0.99 | 1.00 | 0.99
10 | 0.98 | 0.98 | 0.99 | 0.99 | 1.00 | 1.00 | 1.00
20 | 0.98 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
", "html": null, "type_str": "table", "text": "The average precision of S\u00e1nchez-Mart\u00ednez et al.", "num": null }, "TABREF3": { "content": "
Retrieved Documents (N-Best) | Query Size (Size_q in %): 10 | 15 | 20 | 25 | 30 | 35 | 40
1 | 0.884 | 0.936 | 0.964 | 0.972 | 0.983 | 0.987 | 0.990
5 | 0.944 | 0.970 | 0.984 | 0.989 | 0.992 | 0.993 | 0.995
10 | 0.955 | 0.977 | 0.987 | 0.991 | 0.993 | 0.994 | 0.996
20 | 0.966 | 0.984 | 0.991 | 0.992 | 0.994 | 0.994 | 0.997
", "html": null, "type_str": "table", "text": "The retrieval results on TestSet2", "num": null } } } }