---
license: mit
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
---

`CodeRankLLM` is a 7B LLM fine-tuned for listwise code reranking. When combined with performant code retrievers like [`CodeRankEmbed`](https://huggingface.co/cornstack/CodeRankEmbed), it significantly enhances the quality of retrieved results for a variety of code retrieval tasks.
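
The snippet below sketches one way to wire the two models together: `CodeRankEmbed` retrieves candidates, and `CodeRankLLM` reranks them listwise by generating a permutation of candidate identifiers. The reranker repository id, prompt template, and generation settings here are illustrative assumptions rather than the canonical setup; see the evaluation scripts linked below for the exact pipeline.

```python
import torch
from sentence_transformers import SentenceTransformer
from transformers import AutoModelForCausalLM, AutoTokenizer

# --- Stage 1: dense retrieval with CodeRankEmbed (bi-encoder). ---
# Check the CodeRankEmbed model card for any required query instruction prefix.
retriever = SentenceTransformer("cornstack/CodeRankEmbed", trust_remote_code=True)

query = "deduplicate a list while preserving order"
corpus = [
    "def dedupe(xs):\n    seen = set()\n    return [x for x in xs if not (x in seen or seen.add(x))]",
    "def flatten(xss):\n    return [x for xs in xss for x in xs]",
    "def argmax(xs):\n    return max(range(len(xs)), key=xs.__getitem__)",
]
q_emb = retriever.encode([query])
c_emb = retriever.encode(corpus)
top = (q_emb @ c_emb.T)[0].argsort()[::-1]  # candidate indices, best first

# --- Stage 2: listwise reranking with CodeRankLLM. ---
# All candidates share a single prompt, and the model generates a
# permutation of their identifiers, e.g. "[2] > [1] > [3]".
reranker_id = "cornstack/CodeRankLLM"  # assumed id for this repository
tok = AutoTokenizer.from_pretrained(reranker_id)
llm = AutoModelForCausalLM.from_pretrained(reranker_id, torch_dtype=torch.bfloat16)

candidates = "\n".join(f"[{rank + 1}] {corpus[i]}" for rank, i in enumerate(top))
prompt = (
    f"Rank the {len(corpus)} code snippets below by relevance to the query.\n"
    f"Query: {query}\n{candidates}\n"
    "Answer only with identifiers in descending relevance, e.g. [2] > [1] > [3]."
)
input_ids = tok.apply_chat_template(
    [{"role": "user", "content": prompt}], add_generation_prompt=True, return_tensors="pt"
)
out = llm.generate(input_ids, max_new_tokens=32)
print(tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because all candidates sit in one context window, the reranker can trade them off against each other directly instead of scoring each in isolation.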

We release the scripts to evaluate our model's performance [here](https://github.com/gangiswag/cornstack).

## Training

Our code reranker builds on LLM-based listwise reranking, which has gained prominence for its ability to score multiple passages simultaneously. Training data for listwise reranking was generated by selecting 50,000 `<query, positive, negatives>` tuples from our high-quality dataset [CoRNStack](https://gangiswag.github.io/cornstack/), filtered to ensure higher similarity scores and better ranks for the positives. Since CoRNStack doesn't contain the ranked orderings required for training listwise rerankers, we use the [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) LLM to produce a ranked ordering for each example, which serves as ranking supervision. We initialize our reranker with [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) and fine-tune it using a language modeling objective that minimizes the prediction error of the next token in the sequence.
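
As a rough illustration of that objective, the sketch below assembles one listwise training example (query, candidate snippets, and a teacher-provided ordering) and computes the standard next-token cross-entropy loss with the prompt tokens masked out. The prompt format, target format, and prompt masking are assumptions for illustration, not the released training code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # the initialization checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# One listwise example: the prompt holds the query and candidates, and the
# target is the ordering distilled from Qwen2.5-32B-Instruct (format assumed).
prompt = (
    "Rank the 3 code snippets below by relevance to the query.\n"
    "Query: parse an ISO-8601 timestamp\n"
    "[1] def add(a, b): return a + b\n"
    "[2] from datetime import datetime\n"
    "def parse_ts(s): return datetime.fromisoformat(s)\n"
    "[3] def is_even(n): return n % 2 == 0\n"
    "Ranking: "
)
target = "[2] > [3] > [1]"  # teacher-provided ranking supervision

prompt_ids = tok(prompt, return_tensors="pt").input_ids
target_ids = tok(target + tok.eos_token, return_tensors="pt").input_ids
input_ids = torch.cat([prompt_ids, target_ids], dim=1)
labels = input_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100  # only the ranking string contributes to the loss

loss = model(input_ids=input_ids, labels=labels).loss  # next-token cross-entropy
loss.backward()
```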