
Scores of generated queries

This repo contains the score files pertaining to this study. In particular, we scored the expansion queries generated by the T5-based Doc2Query model for the MSMARCO-v1 passage dataset and a subset of the BEIR benchmark. We used an ELECTRA cross-encoder to compute relevance scores between each document's text and its expansion queries. More details are available in the study repo here.
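For illustration, here is a minimal sketch of how such relevance scores can be computed with a cross-encoder. The checkpoint name `cross-encoder/ms-marco-electra-base` is an assumed stand-in; the study may have used a different ELECTRA checkpoint, and the passage and queries below are made up.

```python
# Minimal sketch (not the study's exact setup): scoring expansion queries
# against their source passage with an ELECTRA cross-encoder.
# "cross-encoder/ms-marco-electra-base" is an assumed stand-in checkpoint.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-electra-base")

passage = "The Manhattan Project was a research effort during World War II ..."
expansion_queries = [
    "what was the manhattan project",
    "when did the manhattan project take place",
]

# Each (query, passage) pair receives one relevance score (a raw logit).
scores = model.predict([(q, passage) for q in expansion_queries])
for q, s in zip(expansion_queries, scores):
    print(f"{s:.4f}\t{q}")
```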

Structure

All files are .jsonl files with three fields per line: ["id", "predicted_queries", "querygen_score"]. Each line therefore holds a document id, its expansion queries, and their corresponding ELECTRA relevance scores (a loading sketch follows the list below). The files map to the datasets as follows:

msmarco-v1-80-scored-queries.jsonl is for the MSMARCO-v1 dataset.

dbpedia-20-scored-queries.jsonl is for DBPedia dataset.

quora-20-scored-queries.jsonl is for Quora dataset.

robust04-20-scored-queries.jsonl is for Robust04 dataset.

trec-covid-20-scored-queries.jsonl is for TREC-COVID dataset.

webis-touche2020-20-scored-queries.jsonl is for Touché-2020 dataset.
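
As a quick check of the format, here is a hedged sketch of reading one of these files line by line. The field names follow the description above; the file path is only an example, and the assumption that "predicted_queries" and "querygen_score" are parallel lists is ours.

```python
import json

# Example path; substitute any of the files listed above.
path = "quora-20-scored-queries.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        doc_id = record["id"]                  # document identifier
        queries = record["predicted_queries"]  # expansion queries (assumed list)
        scores = record["querygen_score"]      # ELECTRA scores (assumed parallel list)
        print(doc_id, len(queries), scores[:3])
```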

Credit

The N=80 expansion queries of MSMARCO-v1 were copied from this repository. Please cite their work.

The N=20 expansion queries of the BEIR benchmark were copied from this repository. Please cite their work.

Citation

If you use any part of this repository, please consider citing our work:

@inproceedings{mansour2024revisit,
  title     = {Revisiting Document Expansion and Filtering for Effective First-Stage Retrieval},
  author    = {Mansour, Watheq and Zhuang, Shengyao and Zuccon, Guido and Mackenzie, Joel},
  booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  series    = {SIGIR '24},
  publisher = {Association for Computing Machinery},
  year      = {2024}
}

License

This repository is released under the CC BY 4.0 license.