|
{ |
|
"paper_id": "O08-3002", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:02:31.121498Z" |
|
}, |
|
"title": "Two Approaches for Multilingual Question Answering: Merging Passages vs. Merging Answers", |
|
"authors": [ |
|
{ |
|
"first": "Rita", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Aceves-P\u00e9rez", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Laboratory of Language Technologies", |
|
"institution": "National Institute of Astrophysics", |
|
"location": { |
|
"addrLine": "Luis Enrique Erro #1" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Montes-Y-G\u00f3mez", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Laboratory of Language Technologies", |
|
"institution": "National Institute of Astrophysics", |
|
"location": { |
|
"addrLine": "Luis Enrique Erro #1" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Villase\u00f1or-Pineda", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Laboratory of Language Technologies", |
|
"institution": "National Institute of Astrophysics", |
|
"location": { |
|
"addrLine": "Luis Enrique Erro #1" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Alfonso Ure\u00f1a-L\u00f3pez", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Campus Las Lagunillas s/n", |
|
"institution": "University of Ja\u00e9n", |
|
"location": { |
|
"addrLine": "Edif D3", |
|
"settlement": "Ja\u00e9n", |
|
"country": "Spain" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "One major problem in multilingual Question Answering (QA) is the integration of information obtained from different languages into one single ranked list. This paper proposes two different architectures to overcome this problem. The first one performs the information merging at passage level, whereas the second does it at answer level. In both cases, we applied a set of traditional merging strategies from cross-lingual information retrieval. Experimental results evidence the appropriateness of these merging strategies for the task of multilingual QA, as well as the advantages of multilingual QA over the traditional monolingual approach.", |
|
"pdf_parse": { |
|
"paper_id": "O08-3002", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "One major problem in multilingual Question Answering (QA) is the integration of information obtained from different languages into one single ranked list. This paper proposes two different architectures to overcome this problem. The first one performs the information merging at passage level, whereas the second does it at answer level. In both cases, we applied a set of traditional merging strategies from cross-lingual information retrieval. Experimental results evidence the appropriateness of these merging strategies for the task of multilingual QA, as well as the advantages of multilingual QA over the traditional monolingual approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Question Answering (QA) has become a promising research field whose aim is to provide more natural access to textual information than traditional document retrieval techniques [Laurent et al. 2006] . In essence, a QA system is a kind of search engine that responds to natural language questions with concise and precise answers. For instance, given the question \"Where is the Popocatepetl Volcano located?\", a QA system has to respond \"Mexico\", instead of returning a list of related documents to the volcano.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 197, |
|
"text": "[Laurent et al. 2006]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "There are two recognizable kinds of QA systems that allow management of information in various languages: cross-lingual QA systems and, strictly speaking, multilingual QA systems. The former addresses a situation where questions are formulated in a different language from that of the (single) document collection. The other, in contrast, performs the search over two or more document collections in different languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "It is important to mention that both kinds of systems have some advantages over standard monolingual QA. They mainly allow users to access more information in an easier and faster way than monolingual systems. However, they also introduce additional issues due to the language barrier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Generally speaking, a multilingual QA system can be described as an ensemble of several monolingual systems, where each one works on a different -monolingual -document collection. Under this schema, two additional tasks are required: first, the translation of incoming questions into all target languages, and second, the combination of relevant information extracted from different languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The first problem, namely, the translation of questions from one language to another, has been widely studied in the context of cross-language QA [Aceves-P\u00e9rez et al. 2007; Neumann et al. 2005; Rosso et al. 2007; Sutcliffe et al. 2005] . In contrast, the second task, the merging of information obtained from different languages, has not been specifically addressed in QA. Nevertheless, it is important to mention that there is significant work on combining capacities from several monolingual QA systems [Chu-Carroll et al. 2003; Ahn et al. 2004; Sangoi-Pizzato et al. 2005] , as well as on merging multilingual lists of documents for cross-lingual information retrieval applications [Lin et al. 2002; Savoy et al. 2004] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 172, |
|
"text": "[Aceves-P\u00e9rez et al. 2007;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 193, |
|
"text": "Neumann et al. 2005;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 212, |
|
"text": "Rosso et al. 2007;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 235, |
|
"text": "Sutcliffe et al. 2005]", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 530, |
|
"text": "[Chu-Carroll et al. 2003;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 547, |
|
"text": "Ahn et al. 2004;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 575, |
|
"text": "Sangoi-Pizzato et al. 2005]", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 685, |
|
"end": 702, |
|
"text": "[Lin et al. 2002;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 721, |
|
"text": "Savoy et al. 2004]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In line with these previous works, in this paper we propose two different architectures for multilingual question answering. These architectures differ from each other by the way they handle the combination of multilingual information. Mainly, they take advantage of the pipeline architecture of monolingual QA systems (which includes three main modules, one for question classification, one for passage retrieval, and one for answer extraction) to achieve this combination at two different stages: after the passage retrieval module by mixing together the sets of recovered passages, or after the answer extraction module by directly combining all extracted answers. In other words, our first architecture performs the combination at passage level, whereas the second approach does it at answer level. In both cases, we applied a set of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "well-known strategies for information merging from cross-lingual information retrieval, specifically, Round Robin, Raw Score Value (RSV), CombSUM, and CombMNZ [Lee et al. 1997; Lin et al. 2002; Savoy et al. 2004] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 176, |
|
"text": "[Lee et al. 1997;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 193, |
|
"text": "Lin et al. 2002;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 212, |
|
"text": "Savoy et al. 2004]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The contributions of this paper are two-fold. On the one hand, it represents -to our knowledge -the first attempt for doing \"multilingual\" QA. In particular, it proposes and compares two initial solutions to the problem of multilingual information merging in QA. In addition, this paper also provides some insights on the use of traditional ranking strategies from cross-language information retrieval into the context of multilingual QA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows. Section 2 describes some previous works on information merging. Section 3 presents the proposed architectures for multilingual QA. Section 4 describes the procedures for passage and answer merging. Section 5 shows some experimental results. Finally, section 6 presents our conclusions and outlines future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As we previously mentioned, a multilingual QA system has to consider, in addition to the traditional modules for monolingual QA, stages for question translation and information merging.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The problem of question translation has already been widely studied. Most current approaches rest on the idea of combining capacities of several translation machines. They mainly consider the selection of the best instance from a given set of translations [Aceves-P\u00e9rez et al. 2007; Rosso et al. 2007] as well as the construction of a new question reformulation by gathering terms from all of them [Neumann et al. 2005; Sutcliffe et al. 2005; Aceves-P\u00e9rez et al. 2007] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 282, |
|
"text": "[Aceves-P\u00e9rez et al. 2007;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 301, |
|
"text": "Rosso et al. 2007]", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 419, |
|
"text": "[Neumann et al. 2005;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 442, |
|
"text": "Sutcliffe et al. 2005;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 468, |
|
"text": "Aceves-P\u00e9rez et al. 2007]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "On the other hand, the problem of information merging in multilingual QA has not yet been addressed. However, there is some relevant related work on constructing ensembles of monolingual QA systems. For instance, [Ahn et al. 2004] proposes a method that performs a number of sequential searches over different document collections. At each iteration, this method filters out or confirms the answers found in the previous step. Chu-Carroll et al. [2003] describes a method that applies a general ranking over the five-top answers obtained from different collections. They use a ranking function that is inspired in the well-known RSV technique from cross-language information retrieval. Finally, Sangoi-Pizzato et al. [2005] uses various search engines in order to extract from the Web a set of candidate answers for a given question. It also applies a general ranking over the extracted answers; nevertheless, in this case the ranking function is based on the confidence of search engines instead that on the redundancy of individual answers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 230, |
|
"text": "[Ahn et al. 2004]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 452, |
|
"text": "Chu-Carroll et al. [2003]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 723, |
|
"text": "Sangoi-Pizzato et al. [2005]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Our proposal mainly differs from previous methods in that it not only considers the integration of answers but also takes into account the combination of passages. That is, it also proposes a method that carries out the information merging at an internal stage of the QA process. The proposed merging approach is similar in spirit to Chu-Carroll et al. [2003] and Sangoi-Pizzato et al. [2005] in that it also applies a general ranking over the information extracted from different languages. Like Chu-Carroll et al. [2003] , it uses the RSV ranking function, although it also applies other traditional ranking strategies such as Round Robin, CombSUM and CombMNZ.", |
|
"cite_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 359, |
|
"text": "Chu-Carroll et al. [2003]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 392, |
|
"text": "[2005]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 522, |
|
"text": "Chu-Carroll et al. [2003]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The traditional architecture of a monolingual QA system considers three basic modules: (i) question classification, where the type of expected answer is determined; (ii) passage retrieval, where the passages with the greatest probability to contain the answer are obtained from the target document collection; and (iii) answer extraction, where candidate answers are ranked and the final answer recommendation of the system is produced. In addition, a multilingual QA system must include two other modules, one for question translation and another for information merging. The purpose of the first module is to translate the input question to all target languages, whereas the second module is intended to integrate the information extracted from these languages into one single ranked list.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two Architectures for Multilingual QA", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Figures 1 and 2 show two different architectures for multilingual QA. For the sake of simplicity, in both cases, we do not consider the module for question classification. On the one hand, Figure 1 shows a multilingual QA architecture that does the information merging at passage level. The idea of this approach is to perform in parallel the recovery of relevant passages from all collections (i.e., from all different languages), then integrate these passages into one single ranked list, and then extract the answer from the combined set of passages. On the contrary, Figure 2 illustrates an architecture that achieves the information merging at answer level. In this case, the idea is to perform the complete QA process independently in all languages, and, after that, integrate the sets of answers into one single ranked list.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 197, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 579, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Two Architectures for Multilingual QA", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "It is important to mention that merging processes normally rely on the translation of information to a common language. This translation is required for some merging strategies in order to be able to compare and rank the passages and answers extracted from different The two proposed architectures have different advantages and disadvantages. For instance, doing the information merging at passage level commonly allows obtaining better translations for named entities (possible answers) since they are immersed in an extended context. On the other hand, doing the merging at answer level has the advantage of a clear (unambiguous) comparison of the multilingual information. In other words, comparing two answers (named entities) is a straightforward step, whereas comparing two passages requires the definition of a similarity measure and the determination of a criterion about how similar two different passages should be in order to be considered as equal. This previous problem is not present in monolingual QA ensembles, since in that case all individual QA systems search on the same document collection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two Architectures for Multilingual QA", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The following section introduces some of the most popular information merging strategies used in the task of cross-lingual information retrieval. It also describes the way these strategies are used within the proposed architectures for integrating passages and answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two Architectures for Multilingual QA", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Integrating information retrieved from different document collections or by different search engines is a longstanding problem in information retrieval. Researchers in this field have proposed several strategies for information merging; traditional ones are: Round Robin, RSV (Raw Score Value), CombSUM, and CombMNZ [Lee et al. 1997; Lin et al. 2002] . However, more sophisticated strategies have been proposed recently, such as the 2-step RSV [Mart\u00ednez-Santiago et al. 2006] , and the Z-score value [Savoy et al. 2004] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 333, |
|
"text": "[Lee et al. 1997;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 350, |
|
"text": "Lin et al. 2002]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 475, |
|
"text": "[Mart\u00ednez-Santiago et al. 2006]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 500, |
|
"end": 519, |
|
"text": "[Savoy et al. 2004]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Strategies", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In this work, we mainly study the application of traditional merging strategies in the context of multilingual QA. The following paragraphs give a brief description of these strategies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Strategies", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Round Robin. The retrieved information (in this case, passages or answers) from different languages is interleaved according to its original monolingual rank. In other words, this strategy takes one result in turn from each individual list and alternates them in order to construct the final merged output. The hypothesis underlying this strategy is the homogeneous distribution of relevant information across all languages. In our particular case, as described in Table 1 , this restriction was fulfilled for almost 60% of test questions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 472, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Merging Strategies", |
|
"sec_num": "4.1" |
|
}, |
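
{

"text": "As an illustration only (not part of the original system), the following minimal Python sketch shows this interleaving, assuming that each monolingual run is given as a ranked list of results and that duplicates are simply skipped once emitted:\n\ndef round_robin(ranked_lists):\n    # Take one result in turn from each monolingual list, preserving the\n    # original within-language order and skipping already-emitted duplicates.\n    merged, seen = [], set()\n    longest = max((len(results) for results in ranked_lists), default=0)\n    for position in range(longest):\n        for results in ranked_lists:\n            if position < len(results) and results[position] not in seen:\n                seen.add(results[position])\n                merged.append(results[position])\n    return merged\n\nprint(round_robin([[\"a1\", \"a2\"], [\"b1\"], [\"c1\", \"c2\", \"c3\"]]))\n# ['a1', 'b1', 'c1', 'a2', 'c2', 'c3']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Merging Strategies",

"sec_num": "4.1"

},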
|
{ |
|
"text": ". This strategy sorts all results (passages or answers) by their original score computed independently from each monolingual collection. Differing from Round Robin, this approach is based on the assumption that scores across different collections are comparable. Therefore, this method tends to work well when different collections are Merging Passages vs. Merging Answers searched by the same or very similar methods. In our experiments (refer to Section 5), this condition was fully satisfied since it was applied the same QA system for all languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw Score Value (RSV)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CombSUM. In this strategy, the result scores from each language are initially (min-max) normalized. Afterward, the scores of duplicated results occurring in multiple collections are summed. In particular, we considered the implementation proposed by Lee et al. [1997] : we assigned a score of 21-i to the i-th ranked result from the top 20 of each language, this way, the top passage or answer was scored 20, the second one was scored 19, and so on. Any result not ranked in the top 20 was scored as 0. Finally, we added scores of duplicated results for all different monolingual runs and ranked these results in accordance to their new joint score. For instance, if an answer is ranked 3 rd for one language, 10 th for other one, and does not exist in a third language, then its score is (21-3) + (21-10) + 0 = 29.", |
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 267, |
|
"text": "Lee et al. [1997]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw Score Value (RSV)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CombMNZ. It is based on the same normalization as CombSUM, but also attempts to account for the value of multiple evidence by multiplying the sum of the scores (CombSUM-value) of a result by the number of monolingual collections in which it occurs. Therefore, it can be said that CombSUM is equivalent to averaging, whereas CombMNZ is equivalent to weighted averaging. Using the same example as for the CombSUM strategy, the answer's score is in this case 2 \u00d7 ((21-3) + (21-10) + 0) = 58.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw Score Value (RSV)", |
|
"sec_num": null |
|
}, |
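
{

"text": "The following is a minimal sketch (our illustration, not the original implementation) of the rank-based CombSUM and CombMNZ scoring described above, assuming that each monolingual run is an already-translated ranked list of answer strings; it reproduces the worked example, in which an answer ranked 3rd in one language and 10th in another scores 29 under CombSUM and 58 under CombMNZ:\n\nfrom collections import defaultdict\n\ndef comb_scores(ranked_lists, top_k=20):\n    # Each result in the top_k of a list contributes (top_k + 1) - rank,\n    # i.e. 21 - i for the i-th ranked result when top_k is 20.\n    sums, counts = defaultdict(int), defaultdict(int)\n    for results in ranked_lists:\n        for rank, result in enumerate(results[:top_k], start=1):\n            sums[result] += (top_k + 1) - rank\n            counts[result] += 1\n    comb_sum = dict(sums)\n    # CombMNZ multiplies the summed score by the number of lists containing the result.\n    comb_mnz = {result: sums[result] * counts[result] for result in sums}\n    return comb_sum, comb_mnz\n\nruns = [\n    [\"x\", \"y\", \"Mexico\"],                          # ranked 3rd for one language\n    [f\"other{i}\" for i in range(9)] + [\"Mexico\"],  # ranked 10th for another language\n    [\"z\"],                                         # absent from the third language\n]\ncomb_sum, comb_mnz = comb_scores(runs)\nprint(comb_sum[\"Mexico\"], comb_mnz[\"Mexico\"])  # 29 58",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Raw Score Value (RSV)",

"sec_num": null

},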
|
{ |
|
"text": "It is important to point out that Round Robin and RSV strategies take advantage of the complementarity among collections (when answers are extracted from only one language), whereas ComSUM and CombMNZ also take into account the redundancies of answers (the repeated occurrence of an answer in several languages).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw Score Value (RSV)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given several sets of relevant passages obtained from different languages, the procedure for passage merging considers the following two basic steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Procedures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "1. Translate all passages into one common language. This translation can be done by means of any translation method or online translation machine. However, we suggest translating all passages into the original question's language in order to avoid translation errors in at least one passage set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Procedures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "It is important to clarify that translation is only required by the CombSUM and CombMNZ strategies. Nevertheless, all passages should be translated to one common language before entering the answer extraction module.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Procedures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "2. Combine the sets of passages according to a selected merging strategy. In the case of using the Round Robin or RSV approaches, the combination of passages is straightforward. In contrast, when applying CombSUM or CombMNZ, it is necessary to determine the occurrence of a given passage in two or more collections. Given that it is practically impossible to obtain exactly the same passage from two different collections, it is necessary to define a criterion about how similar two different passages should be in order to be considered as equal. In particular, we measure the similarity of two passages by the Jaccard function (calculated as the cardinality of their vocabulary intersection divided by the cardinality of their vocabulary union) and consider them as equal only if their similarity is greater than a given specified threshold (empirically, we set the threshold value to 0.5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Procedures", |
|
"sec_num": "4.2" |
|
}, |
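
{

"text": "A minimal sketch of this passage-equivalence test (an illustration under our own assumptions, not the original code), where passages are plain strings and the vocabulary is obtained by simple lowercased whitespace tokenization:\n\ndef jaccard(passage_a, passage_b):\n    # Vocabulary overlap: |intersection| / |union| of the two word sets.\n    vocab_a = set(passage_a.lower().split())\n    vocab_b = set(passage_b.lower().split())\n    if not vocab_a and not vocab_b:\n        return 0.0\n    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)\n\ndef same_passage(passage_a, passage_b, threshold=0.5):\n    # Two passages retrieved from different collections are treated as the same\n    # result only if their similarity exceeds the empirically set threshold (0.5).\n    return jaccard(passage_a, passage_b) > threshold",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Merging Procedures",

"sec_num": "4.2"

},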
|
{ |
|
"text": "The procedure for answer merging is practically the same as that for passage merging. It also includes one step for answer translation and another step for answer combination. However, the combination of answers is much simpler than the combination of passages, since they are directly comparable. In this case, the application of all merging strategies is straightforward.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Procedures", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The following paragraphs describe the data and tools used in the experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Languages. We considered three different languages: Spanish, Italian, and French.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Search Collections. We used the document sets from the QA@CLEF evaluation forum. In particular, the Spanish collection consists of 454,045 news documents, the Italian set has 157,558, and the French one contains 129,806.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Test questions. We selected a subset of 170 factoid questions from the MultiEight corpus of CLEF. From all these questions at least one monolingual QA system could extract the correct answer. Table 1 shows answer's distributions across all languages. It is important to note that this set of questions covers all types of currently-evaluated factoid questions; therefore, it is possible to formulate some accurate conclusions about the appropriateness of the proposed architectures.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 199, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Monolingual QA System. We used the passage retrieval and answer extraction components of the TOVA question answering system [Montes-y-G\u00f3mez et al. 2005] . Its selection was mainly supported by its competence in dealing with all the considered languages. Indeed, it obtained the best precision rate for Italian and the second best for both Spanish and", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 152, |
|
"text": "[Montes-y-G\u00f3mez et al. 2005]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "French in the CLEF-2005 evaluation exercise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Translation Machine. The translation of passages and answers was done using the Systran online translation machine (www.systranbox.com). On the other hand, questions were manually translated in order to avoid mistakes at early stages and therefore focus the evaluation on the merging phase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Merging strategies. As we mentioned in the previous section, we applied four traditional merging strategies, namely, Round Robin, RSV, CombSUM, and CombMNZ.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Evaluation Measure. In all experiments, we used the precision as the evaluation measure. It indicates the general proportion of correctly answered questions. In order to enhance the analysis of results, we show the precision at one, three, and five positions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
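
{

"text": "For clarity, a small sketch of how this measure can be computed (our illustration, with hypothetical names), assuming a merged ranked list of candidate answers per question and a gold-standard check for correctness:\n\ndef precision_at(merged_answers_per_question, is_correct, k):\n    # Proportion of questions with at least one correct answer within the\n    # top k positions of the merged list.\n    answered = sum(\n        1\n        for question, answers in merged_answers_per_question.items()\n        if any(is_correct(question, answer) for answer in answers[:k])\n    )\n    return answered / len(merged_answers_per_question)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Merging Passages vs. Merging Answers",

"sec_num": null

},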
|
{ |
|
"text": "Baseline. We decided to use the results from the best monolingual system (the Spanish system in this case) as a baseline. In this way, it is possible to reach conclusions about the advantages of multilingual QA over the standard monolingual approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The objectives of the experiments were twofold: first, to compare the performance of both architectures; and second, to study the applicability and usefulness of traditional merging strategies in the problem of multilingual QA. Additionally, these experiments allowed us to analyze the advantages of multilingual QA over the traditional monolingual approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The first experiment considered information merging at passage level. In this case, the passages obtained from different languages were combined, and the 20 top-ranked were delivered to the answer extraction module. Table 2 shows the precision results obtained using all merging strategies as well as the precision rates of the best monolingual run.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 223, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "From Table 2 , it is clear that merging strategies relying on the complementarity of information (such as Round Robin and RSV) obtain better results than those also considering its redundancy (e.g. CombSUM and CombMNZ). We hypothesize that this behavior was mainly produced by three different factors: (i) the impact of translation errors on the CombSUM and CombMNZ strategies 1 ; (ii) the complexity of assessing the redundancy of passages, i.e., the complexity of correctly deciding whether two different passages should be considered as equal; and (iii) the large number of questions (42%) that have an answer in just one language. The second experiment achieved information merging at answer level. In this experiment, we considered the 10 top-ranked answers from each monolingual QA system. Table 3 shows the results obtained using all different merging strategies. Table 3 are encouraging. They show that all merging strategies achieved high performance levels, improving baseline results at the third and fifth positions by more than 7% and 8%, respectively. Once again, these results indicate that simple strategies outperformed complex ones. However, they do not necessarily mean that Round Robin and RSV are better than CombSum and CombMNZ, instead they only express that the former methods are less sensitive to translation errors.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 12, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 803, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 871, |
|
"end": 878, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Comparing the results of both architectures, it is easy to observe that merging answers obtained better precision rates than merging passages. It seems that this situation is because the combination of answers is easier than the combination of passages; therefore, the first one allows to better taking advantage of both the complementarity as well as the redundancy of information. This phenomenon is more evident in the performance of CombSUM and CombMNZ; in the case of passage merging, their results were always below the baseline, and were -on average -6% below the best precision rate, whereas, in answer merging, they were only 3% below the best result.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table 3. Precision results of the answer merging approach", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In addition, the fact that RSV was the best strategy for passage merging and Round Robin for answer merging shows, on the one hand, the pertinence of the passage scores against the low confidence of the answer scores, and on the other hand, the homogeneous distribution of the answers in all languages (from Table 1 : 65% of the questions has an answer -at the first 20 positions-in Spanish, 55% in French and 55% in Italian).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 315, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Merging Passages vs. Merging Answers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The problem of cross-lingual QA has been widely studied; nevertheless -to our knowledgethere are no specific solutions to the related problem of multilingual QA. This paper focused on this new direction. It proposed two different architectures for multilingual QA. One of them performs information merging at passage level, whereas the other does it at answer level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "A secondary contribution of our work, but not necessarily less important, is the study of the usefulness of traditional ranking strategies from cross-language information retrieval into the context of multilingual QA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "The presented experimental results allowed us to reach the following conclusions: A multilingual QA system may help respond to a larger number of questions than a traditional monolingual QA system. Considering that practical QA systems supply lists of candidate answers instead of isolated responses, our results demonstrated that, using a simple multilingual QA approach, it was possible to answer up to 10% more questions than using a traditional monolingual system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Merging answers seems to be more convenient than merging passages. This assertion is mainly supported by the fact that it is more difficult to observe and compute the information redundancy at passage level than at answer level. In addition, the results of passage merging will inevitably be affected by the (quality of the) answer extraction module, whereas the results of answer merging are the actual output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Translation errors directly affect the performance of some merging strategies. It seems that merging strategies such as CombSUM and CombMNZ are more relevant than the rest (simple ones, such as Round Robin and RSV). However, our results demonstrate that they are more sensitive to translation mistakes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Finally, in order to improve the results of multilingual QA we plan to investigate the following issues:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "1. Using different criteria to evaluate the similarity between passages. In particular, we consider that this action can have an important influence on the performance of strategies based on the information redundancy, such as CombSUM and CombMNZ.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "2. Using ensemble methods for improving the translation of passages and answers. We plan to work with methods that combine the capacities of several translation machines by selecting the best instance from a given set of translations or by constructing a new translation reformulation by gathering terms from all of them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "3. Using new merging strategies. In particular, we are considering applying graph and probabilistic based ranking techniques. We believe these kinds of techniques will help develop more robust multilingual merging strategies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "We do not have an exact estimation of the translation errors for this task, but we suppose they are very abundant. This supposition is based on current reports from cross-lingual QA[Vallin et al. 2005] which indicate severe reductions -as high as 60% -in precision results as a consequence of unsatisfactory question translations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was done under partial support of CONACYT (Project Grant 43990), SNI-Mexico, and the Human Language Technologies Laboratory at INAOE. We also want to thanks to the CLEF organization as well as the EFE agency for the resources provided.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Enhancing Cross-Language Question Answering by Combining Multiple Question Translations", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Aceves-P\u00e9rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Montes-Y-G\u00f3mez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Villase\u00f1or-Pineda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 8th Internactional Conference in Computational Linguistics and Intelligent Text Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "485--493", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aceves-P\u00e9rez, R., M. Montes-y-G\u00f3mez, and L. Villase\u00f1or-Pineda, \"Enhancing Cross-Language Question Answering by Combining Multiple Question Translations,\" In Proceedings of the 8th Internactional Conference in Computational Linguistics and Intelligent Text Processing CICLing-2007, 2007, Mexico City, Mexico, pp. 485-493.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Making Stone Soup: Evaluating a Recall-Oriented Multi-stream Question Answering System for Dutch", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Jijkoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Schlobach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Mishne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 5th Workshop of the Cross-Language Evaluation Forum CLEF", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--434", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahn, D., V. Jijkoun, K. M\u00fcller, M. de Rijke, S. Schlobach, and G. Mishne, \"Making Stone Soup: Evaluating a Recall-Oriented Multi-stream Question Answering System for Dutch,\" In Proceedings of the 5th Workshop of the Cross-Language Evaluation Forum CLEF 2004, 2004, Bath, UK, pp. 423-434.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "In Question Answering, Two Heads are Better than One", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chu-Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Czuba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Prager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chu-Carroll, J., K. Czuba, A. J. Prager, and A. Ittycheriah, \"In Question Answering, Two Heads are Better than One,\" In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology HLT-NAACL 2003, 2003, Edmonton, Canada, pp. 24-31.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "QA better than IR?", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Laurent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "S\u00e9gu\u00e9la", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "N\u00e8gre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Workshop on Multilingual Question Answering MLQA-2006", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laurent, D., P. S\u00e9gu\u00e9la, and S. N\u00e8gre, \"QA better than IR?,\" In Proceedings of the Workshop on Multilingual Question Answering MLQA-2006, 2006, Trento, Italy, pp. 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Analysis of Multiple Evidence Combination", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "267--276", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, J., \"Analysis of Multiple Evidence Combination,\" In Proceedings of the 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1997, Philadelphia, Pennsylvania, United States, pp. 267-276.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Merging Mechanisms in Multilingual Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Third Workshop of the Cross-Language Evaluation Forum CLEF 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "175--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, W. C., and H. H. Chen, \"Merging Mechanisms in Multilingual Information Retrieval,\" In Proceedings of the Third Workshop of the Cross-Language Evaluation Forum CLEF 2002, 2002, Rome, Italy, pp. 175-186.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A Merging Strategy Proposal: The 2-step Retrieval Status Value Method", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Mart\u00ednez-Santiago", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ure\u00f1a-L\u00f3pez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mart\u00edn-Valdivia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Information Retrieval", |
|
"volume": "9", |
|
"issue": "1", |
|
"pages": "71--93", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mart\u00ednez-Santiago, F., L. A. Ure\u00f1a-L\u00f3pez, and M. Mart\u00edn-Valdivia, \"A Merging Strategy Proposal: The 2-step Retrieval Status Value Method,\" Information Retrieval, 9(1), 2006, pp. 71-93.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Full Data-Driven System for Multiple Language Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Montes-Y-G\u00f3mez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Villase\u00f1or-Pineda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "P\u00e9rez-Couti\u00f1o", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "G\u00f3mez-Soriano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Sanchis-Arnal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 6th Workshop of the Cross-Language Evalution Forum CLEF 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "420--428", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Montes-y-G\u00f3mez, M., L. Villase\u00f1or-Pineda, M. P\u00e9rez-Couti\u00f1o, J. M. G\u00f3mez-Soriano, E. Sanchis-Arnal, and P. Rosso, \"A Full Data-Driven System for Multiple Language Question Answering,\" In Proceedings of the 6th Workshop of the Cross-Language Evalution Forum CLEF 2005, 2005, Vienna, Austria. pp. 420-428.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Merging Passages vs. Merging Answers", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Merging Passages vs. Merging Answers", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Experiments on Cross-Linguality and Question-Type Driven Strategy Selection for Open-Domain QA", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Sacaleanu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 6th Workshop of the Cross-Language Evalution Forum CLEF 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "429--438", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neumann, G., and B. Sacaleanu, \"Experiments on Cross-Linguality and Question-Type Driven Strategy Selection for Open-Domain QA,\" In Proceedings of the 6th Workshop of the Cross-Language Evalution Forum CLEF 2005, 2005, Vienna, Austria, pp. 429-438.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Web-based Selection of Optimal Translations of Short Queries", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Buscaldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Iskra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Procesamiento de Lenguaje Natural", |
|
"volume": "38", |
|
"issue": "", |
|
"pages": "49--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rosso, P., D. Buscaldi, and M. Iskra, \"Web-based Selection of Optimal Translations of Short Queries,\" Procesamiento de Lenguaje Natural, 38, 2007, pp.49-52.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Extracting Exact Answers using a Meta Question Answering System", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Sangoi-Pizzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Molla-Aliod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Australasian Language Technology Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sangoi-Pizzato, L. A., and D. Molla-Aliod, \"Extracting Exact Answers using a Meta Question Answering System,\" In Proceedings of the Australasian Language Technology Workshop, 2005, Sidney, Australia, pp. 105-112.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Selection and Merging Strategies for Multilingual Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Savoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Berger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 5th Workshop of the Cross-Language Evaluation Forum CLEF", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Savoy, J., and P. Y. Berger, \"Selection and Merging Strategies for Multilingual Information Retrieval,\" In Proceedings of the 5th Workshop of the Cross-Language Evaluation Forum CLEF 2004, 2004, Bath, UK, pp. 27-37.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Cross-Language French-English Question Answering Using the DLT System at CLEF 2005", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sutcliffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mulcahy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Gabbay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "O'gorman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Slatter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 6th Workshop of the Cross-Language Evalution Forum CLEF 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "502--509", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sutcliffe, R., M. Mulcahy, I. Gabbay, A. O'Gorman, K. White, and D. Slatter, \"Cross-Language French-English Question Answering Using the DLT System at CLEF 2005,\" In Proceedings of the 6th Workshop of the Cross-Language Evalution Forum CLEF 2005, 2005, Vienna, Austria, pp. 502-509.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Multilingual Question Answering Track", |
|
"authors": [], |
|
"year": 2005, |
|
"venue": "Proceedings of the 6th Workshop of the Cross-Language Evalution Forum CLEF 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "307--331", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Multilingual Question Answering Track,\" In Proceedings of the 6th Workshop of the Cross-Language Evalution Forum CLEF 2005, 2005, Vienna, Austria, pp. 307-331.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Multilingual QA based on passage merging Merging Passages vs. Merging Answers", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Multilingual QA based on answer merging languages.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td>Question Question</td><td/><td/><td/></tr><tr><td/><td>Language x Language x</td><td/><td/><td/></tr><tr><td/><td>Translation Translation</td><td/><td>Translation Translation</td><td/></tr><tr><td/><td>Module Module</td><td/><td>Module Module</td><td/></tr><tr><td>Question Question</td><td>Question Question</td><td/><td>Question Question</td><td/></tr><tr><td>Language x Language x</td><td colspan=\"2\">Language y Language y</td><td colspan=\"2\">Language z Language z</td></tr><tr><td>Collection Collection</td><td/><td>Collection Collection</td><td/><td>Collection Collection</td></tr><tr><td>Language x Language x</td><td>Passage Passage</td><td>Language y Language y</td><td>Passage Passage</td><td>Language z Language z</td></tr><tr><td/><td>Retrieval Retrieval</td><td/><td>Retrieval Retrieval</td><td/></tr><tr><td>Relevant Relevant</td><td colspan=\"2\">Relevant Relevant</td><td colspan=\"2\">Relevant Relevant</td></tr><tr><td>Passages Passages</td><td colspan=\"2\">Passages Passages</td><td colspan=\"2\">Passages Passages</td></tr><tr><td>Language x Language x</td><td colspan=\"2\">Language y Language y</td><td colspan=\"2\">Language z Language z</td></tr><tr><td/><td colspan=\"2\">Combined</td><td/><td/></tr><tr><td/><td colspan=\"2\">Passages</td><td/><td/></tr><tr><td/><td>Answer</td><td/><td/><td/></tr><tr><td/><td>Extraction</td><td/><td/><td/></tr><tr><td/><td>Answer</td><td/><td/><td/></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td/><td>Question Question</td><td/><td/><td/></tr><tr><td/><td/><td>Language x Language x</td><td/><td/><td/></tr><tr><td/><td/><td>Translation Translation</td><td/><td>Translation Translation</td><td/></tr><tr><td/><td/><td>Module Module</td><td/><td>Module Module</td><td/></tr><tr><td>Question Question</td><td/><td>Question Question</td><td/><td>Question Question</td><td/></tr><tr><td colspan=\"2\">Language x Language x</td><td colspan=\"2\">Language y Language y</td><td colspan=\"2\">Language z Language z</td></tr><tr><td/><td>Collection Collection</td><td/><td>Collection Collection</td><td/><td>Collection Collection</td></tr><tr><td>Passage</td><td>Language x Language x</td><td>Passage Passage</td><td>Language y Language y</td><td>Passage Passage</td><td>Language z Language z</td></tr><tr><td>Retrieval</td><td/><td>Retrieval Retrieval</td><td/><td>Retrieval Retrieval</td><td/></tr><tr><td colspan=\"2\">Relevant Relevant</td><td colspan=\"2\">Relevant Relevant</td><td colspan=\"2\">Relevant Relevant</td></tr><tr><td colspan=\"2\">Passages Passages</td><td colspan=\"2\">Passages Passages</td><td colspan=\"2\">Passages Passages</td></tr><tr><td colspan=\"2\">Language x Language x</td><td colspan=\"2\">Language y Language y</td><td colspan=\"2\">Language z Language z</td></tr><tr><td colspan=\"2\">Candidate Candidate</td><td colspan=\"2\">Candidate Candidate</td><td colspan=\"2\">Candidate Candidate</td></tr><tr><td colspan=\"2\">Answers Answers</td><td colspan=\"2\">Answers Answers</td><td colspan=\"2\">Answers Answers</td></tr><tr><td colspan=\"2\">Language x Language x</td><td colspan=\"2\">Language y Language y</td><td colspan=\"2\">Language z Language z</td></tr><tr><td/><td/><td>Answer</td><td/><td/><td/></tr><tr><td/><td/><td>Merging</td><td/><td/><td/></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td>SP</td><td>FR</td><td colspan=\"5\">IT SP, FR SP, IT FR, IT SP, FR, IT</td></tr><tr><td>Questions</td><td>37</td><td>21</td><td>15</td><td>20</td><td>25</td><td>23</td><td>29</td></tr><tr><td colspan=\"4\">Percentage 21% 12% 9%</td><td>12%</td><td>15%</td><td>14%</td><td>17%</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>Merging Strategy</td><td>1 st</td><td colspan=\"2\">Precision at: 3 rd</td><td>5 th</td></tr><tr><td>Round Robin</td><td colspan=\"2\">0.41</td><td colspan=\"2\">0.57 0.65</td></tr><tr><td>RSV</td><td colspan=\"2\">0.45</td><td colspan=\"2\">0.65 0.66</td></tr><tr><td>CombSUM</td><td colspan=\"2\">0.40</td><td colspan=\"2\">0.54 0.64</td></tr><tr><td>CombMNZ</td><td colspan=\"2\">0.40</td><td colspan=\"2\">0.54 0.63</td></tr><tr><td>Best Monolingual</td><td colspan=\"2\">0.45</td><td colspan=\"2\">0.57 0.64</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |