|
{ |
|
"paper_id": "O08-3006", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:02:26.578249Z" |
|
}, |
|
"title": "Analyzing Information Retrieval Results With a Focus on Named Entities", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Mandl", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Hildesheim", |
|
"location": { |
|
"addrLine": "Marienburger Platz 22", |
|
"postCode": "31141", |
|
"settlement": "Hildesheim", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "mandl@uni-hildesheim.de" |
|
}, |
|
{ |
|
"first": "Christa", |
|
"middle": [], |
|
"last": "Womser-Hacker", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Hildesheim", |
|
"location": { |
|
"addrLine": "Marienburger Platz 22", |
|
"postCode": "31141", |
|
"settlement": "Hildesheim", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Experiments carried out within evaluation initiatives for information retrieval have been building a substantial resource for further detailed research. In this study, we present a comprehensive analysis of the data of the Cross Language Evaluation Forum (CLEF) from the years 2000 to 2004. Features of the topics are related to the detailed results of more than 100 runs. The analysis considers the performance of the systems for each individual topic. Named entities in topics revealed to be a major influencing factor on retrieval performance. They lead to a significant improvement of the retrieval quality in general and also for most systems and tasks. This knowledge, gained by data mining on the evaluation results, can be exploited for the improvement of retrieval systems as well as for the design of topics for future CLEF campaigns.", |
|
"pdf_parse": { |
|
"paper_id": "O08-3006", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Experiments carried out within evaluation initiatives for information retrieval have been building a substantial resource for further detailed research. In this study, we present a comprehensive analysis of the data of the Cross Language Evaluation Forum (CLEF) from the years 2000 to 2004. Features of the topics are related to the detailed results of more than 100 runs. The analysis considers the performance of the systems for each individual topic. Named entities in topics revealed to be a major influencing factor on retrieval performance. They lead to a significant improvement of the retrieval quality in general and also for most systems and tasks. This knowledge, gained by data mining on the evaluation results, can be exploited for the improvement of retrieval systems as well as for the design of topics for future CLEF campaigns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The Cross Language Evaluation Forum (CLEF) provides a forum for researchers in information retrieval and manages a testbed for mono-and cross-lingual information (CLIR) retrieval systems. CLEF allows the identification of successful approaches, algorithms, and tools in CLIR. Within CLEF, various strategies are employed in order to improve retrieval systems [Braschler and Peters 2004; di Nunzio et al. 2007] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 386, |
|
"text": "[Braschler and Peters 2004;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 409, |
|
"text": "di Nunzio et al. 2007]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We believe that the effort dedicated to large scale evaluation studies can be exploited beyond the optimization of individual systems. The amount of data created by organizers and participants remains a valuable source of knowledge awaiting exploration. Many lessons can still be learned from past data of evaluation initiatives such as CLEF, TREC [Voorhees and Buckland 2002] , INEX [Fuhr 2003 ], NTCIR [Oyama et al. 2003 ], or IMIRSEL [Downie 2003 ].", |
|
"cite_spans": [ |
|
{ |
|
"start": 348, |
|
"end": 376, |
|
"text": "[Voorhees and Buckland 2002]", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 394, |
|
"text": "[Fuhr 2003", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 422, |
|
"text": "[Oyama et al. 2003", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 449, |
|
"text": "[Downie 2003", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Ultimately, further criteria and metrics for the evaluation of search and retrieval methods may be found. This could lead to improved algorithms, quality criteria, resources, and tools in cross language information retrieval [Harman 2004; Schneider et al. 2004] . This general research approach is illustrated in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 238, |
|
"text": "[Harman 2004;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 261, |
|
"text": "Schneider et al. 2004]", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 321, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Topics are considered an essential component of experiments for information retrieval evaluation [Sparck Jones 1995] . In most evaluations, the variation between topics is larger than the variation between systems. The topic creation for a multilingual test environment requires special care in order to avoid cultural or linguistic bias influencing the semantics of topic formulations [Kluck and Womser-Hacker 2002] . It must be assured that each topic provides equal conditions as starting points for the systems. The question remains whether linguistic aspects randomly appearing within the topics have any influence on the retrieval performance. This is especially important, as we observed in some cases, as leaving out one topic from the CLEF campaign changes the ranking of the retrieval systems despite the fact that 50 topics are considered to be sufficiently reliable [Voorhees and Buckley 2002; Zobel 1998 ]. Most analysis of the data generated in CLEF is based on the average performance of the systems. This study concentrates on the retrieval quality of systems for individual topics. By identifying reasons for the failure of certain systems for some topics, these systems can be optimized. Our analysis identified a feature of the topics which can be exploited for future system improvement. In this study, we focused on the impact of named entities in topics and found a significant correlation with the average precision. Consequently, the goal of this study is twofold:", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 116, |
|
"text": "[Sparck Jones 1995]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 416, |
|
"text": "[Kluck and Womser-Hacker 2002]", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 878, |
|
"end": 905, |
|
"text": "[Voorhees and Buckley 2002;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 906, |
|
"end": 916, |
|
"text": "Zobel 1998", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "(a) to measure the effect of named entities on retrieval performance in CLEF (b) to optimize retrieval systems based on these results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Properties", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Named entities pose a potential challenge to cross language retrieval systems, because these systems often rely on machine translation of the query. The following problems may occur when trying to translate a named entity:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Properties", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 The named entity may be out of vocabulary for translation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Properties", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Copying a named entity into the target language often does not help, as the name may be spelled differently (e.g. German: \"Gorbatschow\" vs. English: \"Gorbachev\") \u2022 A named entity can actually be translated (e.g. \"Smith\" could be interpreted as a name or a profession and as the latter, translated)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Properties", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Named entities are a feature which can be easily identified within queries. We consider the systems at CLEF as black boxes and have so far not undertaken any effort to analyze how these systems treat named entities and why that treatment may result in the effects we have observed. The data necessary for such an analysis is not provided by CLEF. The systems use very different approaches, tools and linguistic resources. Each system may treat the same named entity quite differently and successful retrieval may be due to a large number of factors like appropriate treatment as n-gram, proper translation by a translation service, or due to an entry in a linguistic resource. An analysis of the treatment of the named entities would lead merely to case studies. As a consequence, we find a statistical analysis of the overall effect as the appropriate research approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Properties", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The remainder of this paper is organized as follows. The next chapter provides a brief overview of the research on evaluation results and their validity. Chapter three describes the data for CLEF used in our study. In chapter four, the influence of named entities on the overall retrieval results are analyzed. Chapter five explores the relationship between named entities and the performance of individual systems. In chapter six, we show how the performance variation of systems due to named entities could be exploited for system optimization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Properties", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The validity of large-scale information retrieval experiments has been the subject of a considerable amount of research. Zobel concluded that the TREC (Text REtrieval Conference) experiments are reliable as far as the ranking of the systems is concerned [Zobel 1998 ]. Voorhees and Buckley have analyzed the reliability of experiments as a function of the size of the topic set [Voorhees and Buckley 2002] . They concluded that the typical size of the topic set of some 50 topics in TREC is sufficient for a satisfactory level of reliability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 265, |
|
"text": "[Zobel 1998", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 405, |
|
"text": "[Voorhees and Buckley 2002]", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Information Retrieval Evaluation Results", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Human judgments are necessary to evaluate the relevance of the documents. Relevance assessment is a very subjective task. Consequently, assessments by different jurors result in different sets of relevant documents. However, these different sets of relevant documents do not lead to different system rankings according to an empirical analysis [Voorhees 2000 ]. Thus, the subjectivity of the jurors does not call into question the validity of the evaluation results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 358, |
|
"text": "[Voorhees 2000", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Information Retrieval Evaluation Results", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Further research is dedicated toward the question of whether expensive human relevance judgments are necessary or whether the constructed document pool of the most highly ranked documents from all runs may serve as a valid approximation of the human judgments. According to a study by Cahan et al., the ranking of the systems in TREC correlates positively to a ranking based on the document pool without further human judgment [Cahan et al. 2001] . However, there are considerable differences in the ranking which are especially significant for the highest ranks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 427, |
|
"end": 446, |
|
"text": "[Cahan et al. 2001]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Information Retrieval Evaluation Results", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Another important aspect in evaluation studies is pooling. Not all submitted runs can be judged manually by jurors and relevant documents may remain undiscovered. Therefore, a pool of documents is built to which the systems contribute differently. In order to measure the potential effect of pooling, a study was conducted which calculated the final rankings of the systems by leaving out one run at a time ]. It shows that the effect is negligible and that the rankings remain stable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Information Retrieval Evaluation Results", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "However, our analysis shows that leaving out one topic during the result calculation changes the system ranking in most cases. It has also been noted that the differences between topics are larger than the differences between systems. This effect has been observed in TREC [Harman and Voorhees 1997] and also in CLEF [Gey 2001 ].", |
|
"cite_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 299, |
|
"text": "[Harman and Voorhees 1997]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 326, |
|
"text": "[Gey 2001", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Information Retrieval Evaluation Results", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "For example, when looking at run EIT01M3N in the CLEF 2001 campaign, we see that it has a fairly good average precision of 0.341. However, for one topic (nr. 44), which had an average difficulty, this run performs far below (0.07) the average for that topic (0.27). An intellectual analysis of the topics revealed that two of the most difficult topics contained no proper names and that both topics were from the sports domain (Topic 51 and 54). This effect has been noted in many evaluations and also in CLEF [Hollink et al. 2004] . As a consequence, topics are an important part of the design in an evaluation initiative and need to be created very carefully.", |
|
"cite_spans": [ |
|
{ |
|
"start": 510, |
|
"end": 531, |
|
"text": "[Hollink et al. 2004]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Information Retrieval Evaluation Results", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Named entities seem to play an important role especially in multilingual information retrieval [Gey 2001 ]. This assumption is backed by experimental results. The influence of named entities on the retrieval performance is considerable. In an experiment, the removal of named entities from the topic decreased the quality considerably, whereas the use of named entities only in the query led to a much smaller decrease [Demner-Fushman and Oard 2003] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 104, |
|
"text": "[Gey 2001", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 449, |
|
"text": "[Demner-Fushman and Oard 2003]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Information Retrieval Evaluation Results", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "A study for the CLEF campaign 2001 revealed no strong correlation between any single linguistic phenomenon and the system difficulty of a topic. Not even the length of a topic showed any substantial effect, except for named entities. However, the sum of all phenomena was correlated to the performance. The more linguistic phenomena available, the better the systems solved a topic on average [Mandl and Womser-Hacker 2003] . The availability of more variations of a word seems to provide stemming algorithms with more evidence for extraction of the stem, for example.", |
|
"cite_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 423, |
|
"text": "[Mandl and Womser-Hacker 2003]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of Information Retrieval Evaluation Results", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The data for this study stems from the Cross Language Evaluation Forum (CLEF) Peters et al. 2004] . CLEF is a large evaluation initiative which is dedicated to cross-language retrieval for European languages. The setup is similar to the Text Retrieval Conference (TREC) [Harman and Voorhees 1997; Voorhees and Buckland 2002] . The main tasks for multilingual, ad-hoc retrieval are:", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 97, |
|
"text": "Peters et al. 2004]", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 296, |
|
"text": "[Harman and Voorhees 1997;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 324, |
|
"text": "Voorhees and Buckland 2002]", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities in the Multi-lingual Topic Set", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "\u2022 The core and most important track is the multilingual task. The participants choose one topic language and need to retrieve documents in all main languages. The final result set needs to integrate documents from all languages ordered according to relevance regardless of their language. \u2022 The bilingual task requires the retrieval of documents different from the chosen topic language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities in the Multi-lingual Topic Set", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "\u2022 The Monolingual task represents the traditional ad-hoc task in information retrieval and is allowed for some languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities in the Multi-lingual Topic Set", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "All runs analyzed in this study are test runs based on topics for which no previous relevance judgments were known. For training runs, older topics can be used each year. Techniques and algorithms for cross-lingual and multilingual retrieval are described in the CLEF proceedings and are not the focus of this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities in the Multi-lingual Topic Set", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The topic language of a run is the language which the system developers use to start the search and to construct their queries. The topic language needs to be stated by the participants and can be found in the appendix of the CLEF proceedings. The retrieval performance of the runs for the topics can also be extracted from the appendix of the CLEF proceedings Peters et al. 2004] . Most important, the average precision of each run for each topic can be retrieved.", |
|
"cite_spans": [ |
|
{ |
|
"start": 361, |
|
"end": 380, |
|
"text": "Peters et al. 2004]", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities in the Multi-lingual Topic Set", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The topic creation for CLEF needs to assure that each topic is translated into all languages without modifying the content while providing equal chances for systems which start with different topic languages. Therefore, a thorough translation check of all translated topics in CLEF was performed to check if the translations to all languages resulted in the same meaning. Nevertheless, the topic generation process follows a natural method and avoids artificial constructions [Womser-Hacker 2002] . Figure 2 shows an exemplary topic from CLEF containing a named entity. The topic's structure is built up by a short title, a description with a few words and a so-called narrative with one or more sentences. Participants of CLEF have to declare which parts are used for retrieval.", |
|
"cite_spans": [ |
|
{ |
|
"start": 476, |
|
"end": 496, |
|
"text": "[Womser-Hacker 2002]", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 499, |
|
"end": 507, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Topic Creation Process", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "<top lang=\"ES\"> <num>C083</num> <ES-title> Subasta de objetos de Lennon. </ES-title> <ES-desc> Encontrar subastas p\u00fablicas de objetos de John Lennon.</ES-desc> <ES-narr> Los documentos relevantes hablan de subastas que incluyen objetos que pertenecieron a John Lennon, o que se atribuyen a John Lennon.</ES-narr> </top> <top> <num>C083</num> <FR-title> Vente aux ench\u00e8res de souvenirs de John Lennon </FR-title> <FR-desc> Trouvez les ventes aux ench\u00e8res publiques des souvenirs de John Lennon. </FR-desc> <FR-narr> Des documents pertinents d\u00e9criront les ventes aux ench\u00e8res qui incluent les objets qui ont appartenu \u00e0 John Lennon ou qui ont \u00e9t\u00e9 attribu\u00e9s \u00e0 John Lennon. </FR-narr> </top>", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Creation Process", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "An intellectual analysis of the results and the properties of the topics had identified named entities as a potential indicator of good retrieval performance. For that reason, named entities in the CLEF topic set were analyzed in more detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Named entities were intellectually assessed according a published schema [Sekine et al. 2002] . The analysis included all topics from the campaigns in the years 2000 through 2004. The number of named entities in each topic was assessed intellectually. We focused on English, Spanish, and German as topic languages and considered monolingual, bilingual, and multilingual tasks. Table 1 shows the overall number of named entities found in the topic sets. The extraction was done intellectually by graduate students. We also assessed in which parts of the topic the name occurred, whether found in the title, the description, or the narrative. This detailed analysis was not exploited further because very few runs use a source other than title plus description. In very few cases, the topic narrative includes additional named entities not already present in the title and the description. For our analysis, the sum of named entities in all three parts was used. We analyzed the topic set in three languages, and in some cases, differences between the number of named entities between two versions of a topic occur. These differences were considered. In 18 cases, a different number of named entities was assessed between German and English versions of topics 1 through 200, and in 49 cases, a difference was encountered between German and Spanish for topics 41 though 200. For example, topic 91 contains one named entity more for German because German has two potential abbreviations for United Nations (UN and UNO) and both are used.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 93, |
|
"text": "[Sekine et al. 2002]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 377, |
|
"end": 384, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The numbers given in Table 1 are based on the English versions of the topics and consider the number of types rather than tokens of named entities in title, description, and narrative together. The large number of named entities in the topic set shows their importance. Table 2 shows the number of runs within each task. For the analysis presented in chapter five, we divided the topics into three classes: (a) no named entities, (b) one or two named entities, and (c) three or more named entities. The distribution of topics over these three classes is also shown in Table 2 . It can be seen that the three classes are best balanced in CLEF 2002, whereas topics in the second class dominate in CLEF 2003.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 277, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 568, |
|
"end": 575, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Only topics for which no zero results were returned were considered for each sub-task. Since these topics differ between sub-tasks, there are slight differences between the numbers for each class even for one year. For further analysis, only tasks with more than eight runs were considered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Our first goal was to measure whether named entities had any influence on the overall quality of the retrieval results. In order to measure this effect, we first calculated the correlation between the overall retrieval quality achieved for a topic and the number of named entities encountered in this topic. In the second section, this analysis is refined to single tasks and specific topic languages. First, we determined the overall performance in relation to the number of named entities in a topic. The 200 analyzed topics contain between zero and six named entities. For each number n of named entities, we determine the overall performance by two methods: (a) take the best run for each topic and (b) take the average of all runs for a topic. For both methods, we obtain a set of values for n named entities. Within each set, we can determine the maximum, the average, and the minimum. For example, we determine for method (a) the following values: best topic for n named entities, average of all topics for n named entities, and worst topic among all topics with n named entities. The last value gives the performance for the most difficult topic within the set of topics containing n named entities. The maximum of the best runs is in most cases 1.0 and is, therefore, omitted. The following Tables 3 and 4 show these values for CLEF overall. Figures 3 and 4 show detailed analysis for specific tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1300, |
|
"end": 1314, |
|
"text": "Tables 3 and 4", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1351, |
|
"end": 1366, |
|
"text": "Figures 3 and 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Named Entities and General Retrieval Performance", |
|
"sec_num": "4." |
|
}, |
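
{

"text": "A minimal sketch of the correlation analysis described above, assuming the per-topic average precision of every run is available as results[run][topic] and the assessed counts as ne_counts[topic] (both names are illustrative, not CLEF data structures):\n\nfrom statistics import mean\n\ndef pearson(xs, ys):\n    # plain Pearson correlation coefficient\n    mx, my = mean(xs), mean(ys)\n    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))\n    sx = sum((x - mx) ** 2 for x in xs) ** 0.5\n    sy = sum((y - my) ** 2 for y in ys) ** 0.5\n    return cov / (sx * sy)\n\ndef ne_performance_correlation(results, ne_counts):\n    topics = sorted(ne_counts)\n    nes = [ne_counts[t] for t in topics]\n    best = [max(results[r][t] for r in results) for t in topics]  # method (a): best run per topic\n    avg = [mean(results[r][t] for r in results) for t in topics]  # method (b): average over all runs\n    return pearson(nes, best), pearson(nes, avg)\n\nThe two returned values correspond to the correlations computed for methods (a) and (b).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Named Entities and General Retrieval Performance",

"sec_num": null

},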
|
{ |
|
"text": "The CLEF campaign contains relatively few topics with four or more named entities. The results for these values are, consequently, not significant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 3. Method a: Average precision for topics with n named entities for CLEF 2002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It can be seen that topics with more named entities are generally solved better by the systems. This observation can be confirmed by statistical analysis. The average performance correlates to the number of named entities with a value of 0.43 and the best performance with a value of 0.26. Both correlation values are statistically significant at a level of 95%. With one exception, the worst performing category is always the one without any named entities. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 3. Method a: Average precision for topics with n named entities for CLEF 2002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The correlation analysis was also carried out for the individual retrieval tasks or tracks. This can be done by (a) calculating the average precision for each topic achieved within a task, by (b) taking the maximum performance for each topic (taking the maximum average precision that one run achieved for that topic), and by (c) calculating the correlation between named entities and average precision for each run individually and taking the average for all runs within a task. Both measures a and b are presented in Table 5 . Except for one task (multilingual with topic language English in 2001), all observed correlations are positive. Thus, the overall effect occurs within most tasks and even within most single runs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 519, |
|
"end": 526, |
|
"text": "Table 5", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Correlation for Individual Tasks and Topic Languages", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "There is no difference in the average strength of the correlation for German (0.27) and English (0.28) as topic language. The average for each language in the last column shows a more significant difference. The correlation is stronger for German (0.19) than for English (0.15) as topic language. Furthermore, there is a considerable difference between the average correlation for the bilingual (0.35) and multilingual run types (0.22). This could be a hint that the observed positive effect of named entities on retrieval quality is smaller for multilingual retrieval. It needs to be stressed, though, that the effect does not only occur for systems with overall poor performance. Rather, it can be observed in the top ranked runs as well. Figure 5 shows the strength of the correlation for all runs in one task. The runs are ordered according to their average precision. The correlation between the systems MAP for a topic and the number of named entities present in that topic is also shown in Figure 5 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 741, |
|
"end": 749, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 997, |
|
"end": 1005, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Correlation for Individual Tasks and Topic Languages", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this chapter, we show that the systems tested at CLEF perform differently for topics with different numbers of named entities. Although proper names make topics easier in general, and for almost all runs, the performance of systems varies within the three classes of topics based on the number of named entities. As already mentioned, we distinguished three classes of topics: (a) the first class without proper names (called \"none\"), (b) the second class with one or two named entities (called \"few\"), and (c) a third class with three or more named entities (called \"lots\"). This approach is suitable for implementation and allows the categorization before the experiments and the relevance assessment. It requires no intellectual intervention but, solely, a named entity recognition system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion Performance Variation of Systems for Named Entities", |
|
"sec_num": "5." |
|
}, |
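
{

"text": "The categorization into the classes none, few, and lots can be automated with any named entity recognition system; the counts used in this study were assessed intellectually. A minimal sketch, assuming spaCy with an installed English model as the recognizer:\n\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ndef ne_class(topic_text):\n    n = len(nlp(topic_text).ents)  # number of named entity mentions detected\n    if n == 0:\n        return 'none'\n    return 'few' if n <= 2 else 'lots'\n\nNote that such a recognizer counts entity mentions rather than distinct named entities, so it only approximates the intellectual assessment described in chapter three.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Performance Variation of Systems for Named Entities",

"sec_num": null

},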
|
{ |
|
"text": "As we can see in Table 2 , the three categories are well balanced for the CLEF campaign in 2002. For 2003, there are only few topics in the first and second categories. Therefore, the average ranking is extremely similar to the ranking for the second class \"few\". Figure 5 shows that the correlation between average precision and the number of named entities is quite different for all runs for one exemplary task. The runs in Figure 6 according to the original ranking in the task. We observe a slightly decreasing sensitivity for named entities with higher system performance. However, the correlation is still substantial and sometimes still high for top runs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 24, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 272, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 435, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Variation of System Performance", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "A look at the individual runs shows large differences between the three categories. We show the values for three tasks in Figure 6 . The curve for many named entities lies mostly above the average curve, whereas the average precision for the class none without named entities in most cases remains below the overall average. Sometimes, even the best runs perform quite differently for the three categories. Other runs perform similarly for all three categories. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 130, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Variation of System Performance", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The performance variation within the classes leads to different system rankings for the classes. An evaluation campaign including, for example, only topics without named entities may lead to different rankings. To analyze this effect, we determined the rankings for all runs within each named entity class, none, few, and lots. Table 6 shows that the system rankings can be quite different for the three classes. The difference is measured with the Pearson rank correlation coefficient.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 335, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Correlation of System Rankings", |
|
"sec_num": "5.2" |
|
}, |
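
{

"text": "A minimal sketch of the ranking comparison described above, assuming per-class mean average precision values in a dictionary map_by_class[cls][run] (an illustrative name); the rank correlation is computed with scipy, where Spearman's coefficient is the Pearson coefficient applied to the ranks:\n\nfrom scipy.stats import spearmanr\n\ndef ranking_correlation(map_by_class, class_a, class_b):\n    runs = sorted(map_by_class[class_a])\n    scores_a = [map_by_class[class_a][r] for r in runs]\n    scores_b = [map_by_class[class_b][r] for r in runs]\n    rho, _ = spearmanr(scores_a, scores_b)  # correlation between the two induced rankings\n    return rho\n\nComparing, for example, the ranking for the class none with the ranking over all topics yields one of the values discussed above.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Correlation of System Rankings",

"sec_num": null

},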
|
{ |
|
"text": "For most tracks, the original average system ranking is most similar to the ranking based only on the topics with one or two named entities. For the first and second categories, the rankings are more dissimilar. The ranking for the top ten systems in the classes usually differs more from the original ranking. This is due to minor performance differences between top runs. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correlation of System Rankings", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The patterns of the systems are strikingly different for the three classes. As a consequence, there seems to be potential for the combination or fusion of systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimization by Fusion Based on Named Entities", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "We propose the following simple fusion rule. For each topic, the number of named entities is determined. Subsequently, this topic is channeled into the system with the best performance for this named entity class. The best system is a combination of at most three runs. Each category of topics is answered by the optimal system for that number of named entities. By simply choosing the best performing system for each topic, we can also determine a practical upper level for the performance of the retrieval systems. This upper level can give a hint about how much of the potential for improvement is exploited by an approach. Table 6 shows the optimal performance and the improvement by the fusion based on the optimal selection of a system for each category of topics.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 627, |
|
"end": 634, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Optimization by Fusion Based on Named Entities", |
|
"sec_num": "6." |
|
}, |
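
{

"text": "A minimal sketch of this fusion rule, assuming results[run][topic] holds average precision values and ne_class_of_topic maps each topic to its class (both names are illustrative):\n\nfrom statistics import mean\n\ndef fuse(results, ne_class_of_topic, candidate_runs):\n    # for each named entity class, pick the candidate run with the highest\n    # mean average precision over the topics of that class\n    best_run = {}\n    for cls in set(ne_class_of_topic.values()):\n        topics = [t for t, c in ne_class_of_topic.items() if c == cls]\n        best_run[cls] = max(candidate_runs, key=lambda r: mean(results[r][t] for t in topics))\n    # each topic is then answered by the run chosen for its class\n    fused = {t: results[best_run[ne_class_of_topic[t]]][t] for t in ne_class_of_topic}\n    return mean(fused.values()), best_run\n\nThe window-based variant described in the next paragraph simply restricts candidate_runs to three or five neighboring runs from the original ranking.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Optimization by Fusion Based on Named Entities",

"sec_num": null

},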
|
{ |
|
"text": "The highest levels of improvement are achieved for the topic language English. For the year 2002, we observe the highest improvement of 10% for the bilingual runs. For this task, there is also the highest figure for potential, 53%. Figure 7 shows the results of the optimization. The previous analysis showed that our fusion approach has the potential to boost even top runs. Consequently, this technique may also be beneficial for lower-ranked runs. We applied the optimization through fusion for all runs. In the ordering of all runs according to the average precision (original CLEF ranking), we chose a window of three and five neighboring runs. From these three to five runs, we chose the best results for each of the three classes of number of proper names (none, few, or lots). Again, the best run for each class is chosen and contributes to the fusion result. Table 6 shows the average improvement for this fusion technique. This analysis shows that the performance of retrieval systems can be optimized by channeling topics to the systems best appropriated for topics with none, one or two and three and more proper names. Certainly, the application of this fusion on the past results approach is artificial and, in our study, the number of named entities was determined intellectually. However, this mechanism can be easily implemented by using an automatic named entity recognizer.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 240, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF7" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 875, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Optimization by Fusion Based on Named Entities", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "So far, our studies have been focused on the language of the initial topic which participants used for their retrieval efforts. Additionally, we have analyzed the effect of the target or document language. In this case, we cannot consider the multilingual tasks where there are several target languages. However, the monolingual tasks have already been analyzed and are also considered here. The additional analysis is targeted at bilingual retrieval tasks. We grouped all bilingual runs with English, German, and Spanish as document languages. The correlation between the number of named entities in the topics and the average precision of all systems for that topic was calculated. The average precision may be interpreted as the difficulty of the topic. Table 8 shows the results of this analysis. First, we can see a positive correlation for all tasks considered. Named entities support the retrieval also from the perspective of the document language. These results for the year 2002 may be a hint that retrieval in English or German document collections profits more from named entities in the topic than Spanish. However, in 2003, the opposite is the case and English and Spanish switch. For German, there are only 3 runs in 2003. As a consequence, we cannot yet detect any language dependency for the effect of named entities on retrieval performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 757, |
|
"end": 764, |
|
"text": "Table 8", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Named Entities in Topics and Retrieval Performance for Target Languages", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "Research on failure and success stories for individual topics is a promising strategy for the analysis of information retrieval results. Several current research initiatives are focusing on this strategy and are looking at retrieval results beyond average precision [Harman 2004; SIGIR 2005 query difficulty workshop] . We identified named entities in topics as one transparent predictor in multi-and mono-lingual retrieval. Further analysis on named entities should also take the frequency and distribution of the named entities in the corpora into account.", |
|
"cite_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 279, |
|
"text": "[Harman 2004;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 317, |
|
"text": "SIGIR 2005 query difficulty workshop]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resume", |
|
"sec_num": "8." |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Using part-of-speech Patterns to Reduce Query Ambiguity", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Allan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR \u00b402)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "307--314", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allan, J., and H. Raghavan, \"Using part-of-speech Patterns to Reduce Query Ambiguity,\" In Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR \u00b402), Tampere, Finland, Aug. 11-15, 2002, pp. 307-314.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Evaluation of Cross-Language Information Retrieval Systems. Third Workshop of the Cross Language Evaluation Forum CLEF 2003)", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Braschler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Lecture Notes in Computer Science)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Braschler, M., \"CLEF 2002 -Overview of Results,\" Evaluation of Cross-Language Information Retrieval Systems. Third Workshop of the Cross Language Evaluation Forum CLEF 2003), Trondheim. Springer (Lecture Notes in Computer Science).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Cross-Language Evaluation Forum: Objectives, Results, Achievements", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Braschler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Information Retrieval", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "7--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Braschler, M., and C. Peters, \"Cross-Language Evaluation Forum: Objectives, Results, Achievements,\" Information Retrieval, 2004, 7, pp. 7-31.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Ranking Retrieval Systems without Relevance Judgments", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cahan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Nicholas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Soboroff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR '01)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cahan, P., C. Nicholas, and I. Soboroff, \"Ranking Retrieval Systems without Relevance Judgments,\" In Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR '01), New Orleans, USA, Sep. 9-13, 2001, pp. 66-73.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Predicting Query Ambiguity", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Cronen-Townsend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Annual Intl. ACM Conference on Research and Development in Information Retrieval (SIGIR '02)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "299--306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cronen-Townsend, S., Y. Zhou, and B. Croft, \"Predicting Query Ambiguity,\" In Proceedings of the Annual Intl. ACM Conference on Research and Development in Information Retrieval (SIGIR '02), Tampere, Finland, 2002, pp. 299-306.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The effect of bilingual term list size on dictionary-based crosslanguage information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Demner-Fushman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Oard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Thirty-Sixth Hawaii International Conference on System Sciences", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Demner-Fushman, D., and D. Oard, \"The effect of bilingual term list size on dictionary-based crosslanguage information retrieval,\" In Thirty-Sixth Hawaii International Conference on System Sciences, (Hawaii, Jan 6-9, 2003).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Revised Selected Papers", |
|
"authors": [ |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Nunzio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Ferro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mandl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Berlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Evaluation of Multilingual and Multi-modal Information Retrieval. 7 th Workshop of the Cross-Language Evaluation Forum", |
|
"volume": "4730", |
|
"issue": "", |
|
"pages": "21--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Di Nunzio, G., N. Ferro, T. Mandl, and C. Peters, \"CLEF 2006: Ad Hoc Track Overview,\" In Evaluation of Multilingual and Multi-modal Information Retrieval. 7 th Workshop of the Cross-Language Evaluation Forum, (CLEF 2006), Alicante, Spain, Revised Selected Papers. Berlin et al.: Springer (Lecture Notes in Computer Science 4730) 2007, pp. 21-34.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Using Temporal Profiles of Queries for Precision Prediction", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Diaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR '04)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--24", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diaz, F., and R. Jones, \"Using Temporal Profiles of Queries for Precision Prediction,\" In Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR '04), Sheffield, UK, 2004, pp. 18-24.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Toward the Scientific Evaluation of Music Information Retrieval Systems", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Downie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "International Symposium on Music Information Retrieval (ISMIR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Downie, S., \"Toward the Scientific Evaluation of Music Information Retrieval Systems,\" In International Symposium on Music Information Retrieval (ISMIR), Washington, D.C., and Baltimore, USA 2003, http://ismir2003.ismir.net/papers/Downie.PDF.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Topic Structure Modeling", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Shanahan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Sheftel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Annual Intl. ACM Conference on Research and Development in Information Retrieval (SIGIR '02)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "417--418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evans, D., J. Shanahan, and V. Sheftel, \"Topic Structure Modeling,\" In Proceedings of the Annual Intl. ACM Conference on Research and Development in Information Retrieval (SIGIR '02), Tampere, Finland, 2002, pp. 417-418.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Initiative for the Evaluation of XML Retrieval (INEX", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Fuhr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "INEX 2003 Workshop Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fuhr, N., Initiative for the Evaluation of XML Retrieval (INEX): INEX 2003 Workshop Proceedings, Dagstuhl, Germany, December 15-17, 2003. http://purl.oclc.org/NET/duett-07012004-093151.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Research to improve Cross-Language Retrieval. Position Paper for CLEF", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Gey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Berlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Cross-Language Information Retrieval and Evaluation. Workshop of Cross-Language Evaluation Forum (CLEF 2000)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "83--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gey, F., \"Research to improve Cross-Language Retrieval. Position Paper for CLEF,\" In Cross-Language Information Retrieval and Evaluation. Workshop of Cross-Language Evaluation Forum (CLEF 2000), Lisbon, Portugal, September 21-22, 2000). Berlin et al.: Springer [LNCS 2069] 2001, pp. 83-88.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Multilingual Retrieval Experiments with MIMOR at the University of Hildesheim", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Hackl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "K\u00f6lle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mandl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ploedt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J.-H", |
|
"middle": [], |
|
"last": "Scheufen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Womser-Hacker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Evaluation of Cross-Language Information Retrieval Systems. Proceedings CLEF", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hackl, R., R. K\u00f6lle, T. Mandl, A. Ploedt, J.-H. Scheufen, and C. Womser-Hacker, \"Multilingual Retrieval Experiments with MIMOR at the University of Hildesheim,\" In Evaluation of Cross-Language Information Retrieval Systems. Proceedings CLEF 2003", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Revised Selected Papers", |
|
"authors": [ |
|
{ |
|
"first": "Trondheim", |
|
"middle": [], |
|
"last": "Workshop", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ";", |
|
"middle": [], |
|
"last": "Norway", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Berlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "LNCS", |
|
"volume": "3237", |
|
"issue": "", |
|
"pages": "166--173", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Workshop, Trondheim, Norway, Revised Selected Papers. Berlin et al.: Springer [LNCS 3237] 2004, pp. 166-173.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "SIGIR 2004 Workshop. RIA and Where can we go from here?", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Harman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "ACM SIGIR Forum", |
|
"volume": "38", |
|
"issue": "2", |
|
"pages": "45--49", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harman, D., \"SIGIR 2004 Workshop. RIA and Where can we go from here?,\" ACM SIGIR Forum, 38(2), pp. 45-49.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Overview of the Sixth Text REtrieval Conference", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Harman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "The Sixth Text REtrieval Conference (TREC-6)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harman, D., and E. Voorhees, \"Overview of the Sixth Text REtrieval Conference,\" In The Sixth Text REtrieval Conference (TREC-6), National Institute of Standards and Technology, Gaithersburg, Maryland, 1997, http://trec.nist.gov/pubs/.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Monolingual Document Retrieval for European Languages", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Hollink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kamps", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Information Retrieval", |
|
"volume": "7", |
|
"issue": "1-2", |
|
"pages": "33--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hollink, V., J. Kamps, C. Monz, and M. de Rijke, \"Monolingual Document Retrieval for European Languages,\" Information Retrieval, 7(1-2), pp. 33-52.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Inside the Evaluation Process of the Cross-Language Evaluation Forum (CLEF): Issues of Multilingual Topic Creation and Multilingual Relevance Assessment", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kluck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Womser-Hacker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "573--576", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kluck, M., and C. Womser-Hacker, \"Inside the Evaluation Process of the Cross-Language Evaluation Forum (CLEF): Issues of Multilingual Topic Creation and Multilingual Relevance Assessment,\" In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002), Las Palmas de Gran Canaria, Spain, May 29-31, 2002, ELRA, Paris, 2002, pp. 573-576.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Predictive Caching and Prefetching of Query Results in Search Engines", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Lempel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Twelfth International World Wide Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lempel, R., and S. Moran, \"Predictive Caching and Prefetching of Query Results in Search Engines,\" in Proceedings of the Twelfth International World Wide Web Conference (WWW 2003), Budapest, Hungary, May 20-24, 2003. pp. 19-28.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Linguistic and Statistical Analysis of the CLEF Topics", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mandl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Womser-Hacker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Advances in Cross-Language Information Retrieval: Third Workshop of the Cross-Language Evaluation Forum", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandl, T., and C. Womser-Hacker, \"Linguistic and Statistical Analysis of the CLEF Topics,\" In Advances in Cross-Language Information Retrieval: Third Workshop of the Cross-Language Evaluation Forum (CLEF 2002), Rome, Italy, September 19-20, 2002", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A Framework for long-term Learning of Topical User Preferences in Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mandl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Womser-Hacker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "New Library World", |
|
"volume": "105", |
|
"issue": "5", |
|
"pages": "184--195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandl, T., and C. Womser-Hacker, \"A Framework for long-term Learning of Topical User Preferences in Information Retrieval,\" New Library World, 105(5/6), pp. 184-195.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "NTCIR Workshop 3: Proceedings of the Third NTCIR Workshop on research in Information Retrieval, Automatic Text Summarization and Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Oyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Ishida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Kando", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oyama, K., E. Ishida, and N. Kando, NTCIR Workshop 3: Proceedings of the Third NTCIR Workshop on research in Information Retrieval, Automatic Text Summarization and Question Answering, Tokio, 2003. http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings3/index.html", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Evaluation of Cross-Language Information Retrieval Systems", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Braschler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Gonzalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kluck", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2002, |
|
"venue": "Third Workshop of the Cross Language Evaluation Forum", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peters, C., M. Braschler, J. Gonzalo, and M. Kluck, \"Evaluation of Cross-Language Information Retrieval Systems,\" In Third Workshop of the Cross Language Evaluation Forum (CLEF 2002), Rome. Berlin et al.: Springer (Lecture Notes in Computer Science 2785) 2003.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Comparative Evaluation of Multilingual Information Access Systems", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Gonzalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Braschler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kluck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "4 th Workshop of the Cross-Language Evaluation Forum (CLEF 2003)", |
|
"volume": "3237", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peters, C., J. Gonzalo, M. Braschler, and M. Kluck, \"Comparative Evaluation of Multilingual Information Access Systems,\" In 4 th Workshop of the Cross-Language Evaluation Forum (CLEF 2003), Trondheim, Norway, August 21-22, 2003, Revised Selected Papers. Springer Lecture Notes in Computer Science 3237, 2004.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Workshop LECLIQ: Lessons Learned from Evaluation: Towards Integration and Transparency in Cross-Lingual Information Retrieval with a special Focus on Quality Gates", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mandl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Womser-Hacker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Workshop Lessons Learned from Evaluation: Towards Transparency and Integration in Cross-Lingual Information Retrieval (LECLIQ)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schneider, R., T. Mandl, and C. Womser-Hacker, \"Workshop LECLIQ: Lessons Learned from Evaluation: Towards Integration and Transparency in Cross-Lingual Information Retrieval with a special Focus on Quality Gates,\" In 4 th Intl Conference on Language Resources and Evaluation (LREC), Lisbon, Portugal, May 24-30, 2004. Workshop Lessons Learned from Evaluation: Towards Transparency and Integration in Cross-Lingual Information Retrieval (LECLIQ), pp. 1-4.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Extended Named Entity Hierarchy", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Nobata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of Third International Conference on Language Resources and Evaluation (LREC 2002)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sekine, S., K. Sudo, and C. Nobata, \"Extended Named Entity Hierarchy,\" In: Proceedings of Third International Conference on Language Resources and Evaluation (LREC 2002), Las Palmas, Canary Islands, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Ranking retrieval systems without relevance judgements", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Soboroff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Nicholas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cahan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 24th Annual International ACM SIGIR Conference of Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soboroff, I., C. Nicholas, and P. Cahan, \"Ranking retrieval systems without relevance judgements,\" In Proceedings of the 24th Annual International ACM SIGIR Conference of Research and Development in Information Retrieval, New Orleans, pp. 66-73.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Reflections on TREC", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Sparck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Information Processing and Management", |
|
"volume": "31", |
|
"issue": "3", |
|
"pages": "291--314", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sparck, J.K., \"Reflections on TREC,\" Information Processing and Management, 31(3), pp. 291-314.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "The Effect of Topic Set Size on Retrieval Experiment Error", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Buckley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR '02)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "316--323", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Voorhees, E., and C. Buckley, \"The Effect of Topic Set Size on Retrieval Experiment Error,\" In Proceedings of the Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR '02), Tampere, Finland, Aug. 11-15, 2002, ACM Press, pp. 316-323.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Variations in Relevance Judgements and the Measurement of Retrieval Effectiveness", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Information Processing and Management", |
|
"volume": "36", |
|
"issue": "5", |
|
"pages": "679--716", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Voorhees, E., \"Variations in Relevance Judgements and the Measurement of Retrieval Effectiveness,\" Information Processing and Management, 36(5), 2000, pp. 679-716.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "The Eleventh Text Retrieval Conference", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Buckland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Voorhees, E., and L. Buckland, The Eleventh Text Retrieval Conference (TREC 2002), National Institute of Standards and Technology, Gaithersburg, Maryland. Nov. 2002. http://trec.nist.gov/pubs/trec11/t11_proceedings.html.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Multilingual Topic Generation within the CLEF 2001 Experiments", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Womser-Hacker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Evaluation of Cross-Language Information Retrieval Systems: Second Workshop of the Cross-Language Evaluation Forum (CLEF 2001), Peters C.; Braschler M", |
|
"volume": "2406", |
|
"issue": "", |
|
"pages": "389--393", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Womser-Hacker, C., \"Multilingual Topic Generation within the CLEF 2001 Experiments,\" In Evaluation of Cross-Language Information Retrieval Systems: Second Workshop of the Cross-Language Evaluation Forum (CLEF 2001), Peters C.; Braschler M.; Gonzalo, J. and Kluck, Michael (Eds.). 2002, Darmstadt, Germany, September 3-4, 2001. Springer, LNCS 2406, pp. 389-393.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "How Reliable are the Results of Large-Scale Information Retrieval Experiments?", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zobel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 21 st Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR '98)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "307--314", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zobel, J., \"How Reliable are the Results of Large-Scale Information Retrieval Experiments?\" In Proceedings of the 21 st Annual International ACM Conference on Research and Development in Information Retrieval (SIGIR '98), Melbourne, Australia, 1998, pp. 307-314.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "General overview of the research approach." |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Method b: Relation between system performance and the number of named entities in CLEF 2002" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Correlation between named entities and performance for runs in CLEF 2002 (task bilingual, topic language English)" |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Figure 5 shows that the correlation between average precision and the number of named entities is quite different for all runs for one exemplary task. The runs in Figure 6 are ordered" |
|
}, |
|
"FIGREF6": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Performance variation of runs in CLEF 2002 (task bilingual, topic language English) depending on number of named entities in topic" |
|
}, |
|
"FIGREF7": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Optimization potential of named entity based fusion" |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>CLEF</td><td colspan=\"2\">Number of</td><td colspan=\"2\">Total number of</td><td colspan=\"2\">Average number of</td><td>Standard deviation of</td></tr><tr><td>year</td><td>topics</td><td/><td colspan=\"2\">named entities</td><td colspan=\"2\">named entities in topics</td><td>named entities in topics</td></tr><tr><td>2000</td><td>40</td><td/><td>52</td><td/><td>1.14</td><td/><td>1.12</td></tr><tr><td>2001</td><td>50</td><td/><td>60</td><td/><td>1.20</td><td/><td>1.06</td></tr><tr><td>2002</td><td>50</td><td/><td>86</td><td/><td>1.72</td><td/><td>1.54</td></tr><tr><td>2003</td><td>60</td><td/><td>97</td><td/><td>1.62</td><td/><td>1.18</td></tr><tr><td>2004</td><td>50</td><td/><td>72</td><td/><td>1.44</td><td/><td>1.30</td></tr><tr><td colspan=\"7\">Table 2. Overview of named entities in CLEF tasks</td></tr><tr><td>CLEF year</td><td>Task</td><td colspan=\"2\">Topic language</td><td>Nr. runs</td><td>Topics without named entities</td><td colspan=\"2\">Topics with one or two named entities</td><td>Topics with more than three named entities</td></tr><tr><td>2001</td><td>Bi</td><td colspan=\"2\">German</td><td>9</td><td>16</td><td>24</td><td>7</td></tr><tr><td colspan=\"2\">2001 Multi</td><td colspan=\"2\">German</td><td>5</td><td>16</td><td>24</td><td>7</td></tr><tr><td>2001</td><td>Bi</td><td colspan=\"2\">English</td><td>3</td><td>16</td><td>24</td><td>7</td></tr><tr><td colspan=\"2\">2001 Multi</td><td colspan=\"2\">English</td><td>17</td><td>17</td><td>26</td><td>7</td></tr><tr><td colspan=\"4\">2002 Mono German</td><td>21</td><td>12</td><td>21</td><td>17</td></tr><tr><td colspan=\"4\">2002 Mono Spanish</td><td>28</td><td>11</td><td>18</td><td>21</td></tr><tr><td>2002</td><td>Bi</td><td colspan=\"2\">German</td><td>4</td><td>12</td><td>21</td><td>17</td></tr><tr><td colspan=\"2\">2002 Multi</td><td colspan=\"2\">German</td><td>4</td><td>12</td><td>21</td><td>17</td></tr><tr><td>2002</td><td>Bi</td><td colspan=\"2\">English</td><td>51</td><td>14</td><td>21</td><td>15</td></tr><tr><td colspan=\"2\">2002 Multi</td><td colspan=\"2\">English</td><td>32</td><td>14</td><td>21</td><td>15</td></tr><tr><td colspan=\"4\">2003 Mono Spanish</td><td>38</td><td>6</td><td>33</td><td>21</td></tr><tr><td colspan=\"2\">2003 Multi</td><td colspan=\"2\">Spanish</td><td>10</td><td>6</td><td>33</td><td>21</td></tr><tr><td colspan=\"4\">2003 Mono German</td><td>30</td><td>9</td><td>40</td><td>10</td></tr><tr><td>2003</td><td>Bi</td><td colspan=\"2\">German</td><td>24</td><td>9</td><td>40</td><td>10</td></tr><tr><td>2003</td><td>Bi</td><td colspan=\"2\">English</td><td>8</td><td>9</td><td>41</td><td>10</td></tr><tr><td colspan=\"2\">2003 Multi</td><td colspan=\"2\">English</td><td>74</td><td>9</td><td>41</td><td>10</td></tr><tr><td colspan=\"2\">2004 Multi</td><td colspan=\"2\">English</td><td>34</td><td>16</td><td>23</td><td>11</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Number of named entities</td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td>Number of Topics</td><td colspan=\"5\">42 43 40 20 9</td><td>4</td></tr><tr><td>Average of Best System per Topic</td><td colspan=\"6\">0.62 0.67 0.76 0.83 0.79 0.73</td></tr><tr><td>Minimum of Best System per Topic</td><td colspan=\"6\">0.09 0.12 0.04 0.28 0.48 0.40</td></tr><tr><td colspan=\"7\">Standard Deviation of Best System per Topic 0.24 0.24 0.24 0.18 0.19 0.29</td></tr><tr><td colspan=\"7\">Table 4. Method b: Average precision of runs in relation to the number of</td></tr><tr><td>named entities in the topic</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Number of named entities</td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td>Number of Topics</td><td colspan=\"5\">42 43 40 20 9</td><td>4</td></tr><tr><td colspan=\"7\">Minimum of Average Performance per Topic 0.02 0.04 0.01 0.10 0.17 0.20</td></tr><tr><td>Average of Average Performance per Topic</td><td colspan=\"6\">0.20 0.25 0.36 0.40 0.31 0.40</td></tr><tr><td colspan=\"7\">Maximum of Average Performance per Topic 0.54 0.61 0.78 0.76 0.58 0.60</td></tr><tr><td colspan=\"7\">Standard Deviation of Average Performance 0.14 0.15 0.18 0.17 0.14 0.19</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>CLEF year</td><td>Run type</td><td>Topic language</td><td>Num-ber of runs</td><td>(a) Correlation of average precision per topic to number of NEs</td><td>Level of statistical significance (t-distribution) for prev. column</td><td>(b) Correlation of max. precision per topic to nr. of NEs</td></tr><tr><td>2001</td><td>Bilingual</td><td>German</td><td>9</td><td>0.44</td><td>-</td><td>0.32</td></tr><tr><td colspan=\"2\">2001 Multilingual</td><td>German</td><td>5</td><td>0.19</td><td>-</td><td>0.24</td></tr><tr><td>2001</td><td>Bilingual</td><td>English</td><td>3</td><td>0.20</td><td>-</td><td>0.13</td></tr><tr><td colspan=\"2\">2001 Multilingual</td><td>English</td><td>17</td><td>-0.34</td><td>-</td><td>-0.36</td></tr><tr><td>2002</td><td>Bilingual</td><td>German</td><td>4</td><td>0.33</td><td>-</td><td>0.25</td></tr><tr><td colspan=\"2\">2002 Multilingual</td><td>German</td><td>4</td><td>0.43</td><td>-</td><td>0.41</td></tr><tr><td>2002</td><td>Bilingual</td><td>English</td><td>51</td><td>0.40</td><td>99%</td><td>0.36</td></tr><tr><td colspan=\"2\">2002 Multilingual</td><td>English</td><td>32</td><td>0.29</td><td>-</td><td>0.37</td></tr><tr><td colspan=\"2\">2002 Monolingual</td><td>German</td><td>21</td><td>0.45</td><td>95%</td><td>0.34</td></tr><tr><td colspan=\"2\">2002 Monolingual</td><td>Spanish</td><td>28</td><td>0.21</td><td>-</td><td>0.27</td></tr><tr><td>2003</td><td>Bilingual</td><td>German</td><td>24</td><td>0.21</td><td>-</td><td>0.10</td></tr><tr><td>2003</td><td>Bilingual</td><td>English</td><td>8</td><td>0.41</td><td>-</td><td>0.47</td></tr><tr><td colspan=\"2\">2003 Multilingual</td><td>English</td><td>74</td><td>0.31</td><td>99%</td><td>0.27</td></tr><tr><td colspan=\"2\">2003 Monolingual</td><td>German</td><td>30</td><td>0.37</td><td>95%</td><td>0.28</td></tr><tr><td colspan=\"2\">2003 Monolingual</td><td>Spanish</td><td>38</td><td>0.39</td><td>99%</td><td>0.33</td></tr><tr><td colspan=\"2\">2003 Monolingual</td><td>English</td><td>11</td><td>0.16</td><td>-</td><td>0.24</td></tr><tr><td colspan=\"2\">2003 Multilingual</td><td>Spanish</td><td>10</td><td>0.21</td><td>-</td><td>0.31</td></tr><tr><td colspan=\"2\">2004 Multilingual</td><td>English</td><td>34</td><td>0.33</td><td>95%</td><td>0.34</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "These findings are not always statistically significant because each category contains only few topics. As stated by Buckley and Voorhees, some 50 topics are necessary to create a reliable ranking [Buckley and.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td>Sub-Task</td><td/><td/><td>Topic sub-set</td><td/></tr><tr><td>CLEF year</td><td>Run type</td><td>Topic language</td><td>Number of runs</td><td colspan=\"3\">No NEs few NEs lots NEs</td></tr><tr><td>2001</td><td>Bilingual</td><td>German</td><td>9</td><td>0.92</td><td>0.93</td><td>0.92</td></tr><tr><td colspan=\"2\">2001 Multilingual</td><td>English</td><td>17</td><td>0.98</td><td>0.93</td><td>0.75</td></tr><tr><td>2002</td><td>Bilingual</td><td>English</td><td>51</td><td>0.88</td><td>0.93</td><td>0.74</td></tr><tr><td colspan=\"2\">2002 Multilingual</td><td>English</td><td>32</td><td>0.94</td><td>0.99</td><td>0.98</td></tr><tr><td>2003</td><td>Bilingual</td><td>German</td><td>24</td><td>0.81</td><td>0.99</td><td>0.91</td></tr><tr><td colspan=\"2\">2002 Multilingual</td><td>English</td><td>74</td><td>0.86</td><td>1.00</td><td>0.93</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>CLEF year</td><td>Run type</td><td>Topic language</td><td>Average precision best run</td><td>Optimal average precision name fusion</td><td>Improve-ment over best run</td><td>Practical optimal average precision.</td><td>Improve-ment over best run</td></tr><tr><td>2001</td><td>Bilingual</td><td>German</td><td>0.509</td><td>0.518</td><td>2%</td><td>0.645</td><td>27%</td></tr><tr><td colspan=\"3\">2001 Multilingual English</td><td>0.405</td><td>0.406</td><td>0%</td><td>0.495</td><td>22%</td></tr><tr><td>2002</td><td>Bilingual</td><td>English</td><td>0.4935</td><td>0.543</td><td>10%</td><td>0.758</td><td>53%</td></tr><tr><td colspan=\"3\">2002 Multilingual English</td><td>0.378</td><td>0.403</td><td>6.5%</td><td>0.456</td><td>21%</td></tr><tr><td>2003</td><td>Bilingual</td><td>German</td><td>0.460</td><td>0.460</td><td>0%</td><td>0.622</td><td>35%</td></tr><tr><td>2003</td><td>Bilingual</td><td>English</td><td>0.348</td><td>0.369</td><td>6.1%</td><td>0.447</td><td>28%</td></tr><tr><td colspan=\"3\">2003 Multilingual English</td><td>0.438</td><td>0.443</td><td>1.2%</td><td>0.568</td><td>30%</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">CLEF year Task type</td><td>Target language</td><td>Number of runs</td><td>Correlation between number of named entities and average precision</td></tr><tr><td>2003</td><td>Mono</td><td>English</td><td>11</td><td>0.158</td></tr><tr><td>2002</td><td>Bi</td><td>English</td><td>16</td><td>0.577</td></tr><tr><td>2003</td><td>Bi</td><td>English</td><td>15</td><td>0.187</td></tr><tr><td>2002</td><td>Mono</td><td>German</td><td>21</td><td>0.372</td></tr><tr><td>2003</td><td>Mono</td><td>German</td><td>30</td><td>0.449</td></tr><tr><td>2002</td><td>Bi</td><td>German</td><td>13</td><td>0.443</td></tr><tr><td>2003</td><td>Bi</td><td>German</td><td>3</td><td>0.379</td></tr><tr><td>2002</td><td>Mono</td><td>Spanish</td><td>28</td><td>0.385</td></tr><tr><td>2003</td><td>Mono</td><td>Spanish</td><td>38</td><td>0.207</td></tr><tr><td>2002</td><td>Bi</td><td>Spanish</td><td>16</td><td>0.166</td></tr><tr><td>2003</td><td>Bi</td><td>Spanish</td><td>25</td><td>0.427</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |