🧠 LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences

This is the official model release for LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences.

LLM-QE improves query expansion in information retrieval by leveraging Large Language Models (LLMs) and aligning them with ranking preferences, so that the generated expansions better match what the downstream retriever prefers.


📄 Paper

For a detailed explanation of the methodology and experiments, please refer to our paper:
LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences


🔄 Reproduce the Results

To reproduce the experiments and benchmarks from the paper, follow the instructions provided in the official GitHub repository: 👉 GitHub: NEUIR/LLM-QE.

🛠 Model Details

  • Model Name: LLM-QE-Contriever
  • Architecture: Contriever dense retriever fine-tuned with supervised contrastive learning on the LLM-generated query expansions (see the training sketch below)
  • Parameters: ~109M (F32, Safetensors)
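
The exact training recipe is in the GitHub repository; as a rough illustration only, a contrastive (InfoNCE-style) objective with in-batch negatives looks like the sketch below. The temperature value and the use of in-batch negatives are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb: torch.Tensor,
                     doc_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style loss with in-batch negatives (illustrative sketch).

    query_emb, doc_emb: (batch, dim) embeddings; the i-th document is the
    positive for the i-th query, and every other document in the batch serves
    as a negative. The temperature value is an assumption, not the paper's setting.
    """
    scores = query_emb @ doc_emb.T / temperature      # (batch, batch) similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```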

📈 Usage

You can use this model as the dense retriever in query expansion pipelines, particularly in information retrieval systems that benefit from alignment with ranking preferences. A minimal retrieval sketch is shown below.
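
The following is a minimal sketch using the 🤗 Transformers library. The model identifier yaosijiaaaaa/LLM-QE-Contriever is taken from this page; the mean pooling over token embeddings mirrors the standard Contriever recipe and is an assumption here, as are the example expanded query and documents.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "yaosijiaaaaa/LLM-QE-Contriever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def mean_pooling(token_embeddings: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings while ignoring padding (standard Contriever pooling).
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)
    return token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]

# Hypothetical example: the query is concatenated with an LLM-generated expansion.
expanded_query = ("what causes aurora borealis "
                  "charged particles from the sun collide with gases in Earth's atmosphere")
documents = [
    "The aurora borealis appears when solar wind particles interact with the magnetosphere.",
    "The Great Wall of China stretches across northern China.",
]

with torch.no_grad():
    inputs = tokenizer([expanded_query] + documents,
                       padding=True, truncation=True, return_tensors="pt")
    embeddings = mean_pooling(model(**inputs).last_hidden_state,
                              inputs["attention_mask"])

# Rank documents by dot-product similarity to the expanded query.
scores = embeddings[0] @ embeddings[1:].T
print(scores)
```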

🔖 Citation

If you use LLM-QE in your work, please consider citing our paper:

@misc{yao2025llmqeimprovingqueryexpansion,
      title={LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences}, 
      author={Sijia Yao and Pengcheng Huang and Zhenghao Liu and Yu Gu and Yukun Yan and Shi Yu and Ge Yu},
      year={2025},
      eprint={2502.17057},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2502.17057}, 
}