---
tags:
- mteb
model-index:
- name: ListConRanker
  results:
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/CMedQAv1-reranking
      name: MTEB CMedQAv1
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 90.55366308098787
    - type: mrr_1
      value: 87.8
    - type: mrr_10
      value: 92.45134920634919
    - type: mrr_5
      value: 92.325
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/CMedQAv2-reranking
      name: MTEB CMedQAv2
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 89.38076135722042
    - type: mrr_1
      value: 85.9
    - type: mrr_10
      value: 91.28769841269842
    - type: mrr_5
      value: 91.08999999999999
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/Mmarco-reranking
      name: MTEB MMarcoReranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 43.881461866703894
    - type: mrr_1
      value: 32.0
    - type: mrr_10
      value: 44.534126984126985
    - type: mrr_5
      value: 43.45
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/T2Reranking
      name: MTEB T2Reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 69.16513825032682
    - type: mrr_1
      value: 67.36628300609343
    - type: mrr_10
      value: 80.06914890758831
    - type: mrr_5
      value: 79.69137892123675
---

# ListConRanker
## Model

- We propose a **List**wise-encoded **Con**trastive text re**Ranker** (**ListConRanker**), which includes a ListTransformer module for listwise encoding. The ListTransformer facilitates global contrastive information learning between passage features, including the clustering of similar passages, the clustering between dissimilar passages, and the distinction between similar and dissimilar passages. In addition, we propose ListAttention to help the ListTransformer maintain the query features while learning global comparative information.
- The training loss function is Circle Loss [1]. Compared with cross-entropy loss and ranking loss, it alleviates the problems of low data efficiency and non-smooth gradient changes; a minimal sketch of the loss follows this list.
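The snippet below is a minimal, illustrative PyTorch sketch of Circle Loss [1] for a single query, where `sp` holds similarities to similar (positive) passages and `sn` holds similarities to dissimilar (negative) passages. The margin `m` and scale `gamma` are the generic defaults from the Circle Loss paper, not necessarily the hyperparameters used to train ListConRanker.

```python
import torch
import torch.nn.functional as F

def circle_loss(sp: torch.Tensor, sn: torch.Tensor,
                m: float = 0.25, gamma: float = 32.0) -> torch.Tensor:
    """Circle Loss for one query.

    sp: similarities between the query and similar (positive) passages.
    sn: similarities between the query and dissimilar (negative) passages.
    """
    # Self-paced weights: pairs far from their optimum receive larger weights.
    ap = (1.0 + m - sp.detach()).clamp_min(0.0)
    an = (sn.detach() + m).clamp_min(0.0)
    # Decision margins for positives and negatives.
    delta_p, delta_n = 1.0 - m, m
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    # loss = log(1 + sum_j exp(logit_n_j) * sum_i exp(logit_p_i))
    return F.softplus(torch.logsumexp(logit_n, dim=0) +
                      torch.logsumexp(logit_p, dim=0))
```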

## Data
The training data consists of approximately 2.6 million queries, each paired with multiple passages. The data comes from the training sets of several datasets, including cMedQA1.0, cMedQA2.0, MMarcoReranking, T2Reranking, huatuo, MARC, XL-sum, CSL, and others.

## Training
We trained the model in two stages. In the first stage, we freeze the parameters of the embedding model and train only the ListTransformer for 4 epochs with a batch size of 1024. In the second stage, we unfreeze all parameters and train for another 2 epochs with a batch size of 256 (an illustrative sketch of this staged setup is shown below).
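
As an illustration only, the staged setup could look like the following PyTorch sketch. The attribute names `embedding_model` and `list_transformer`, and the learning rates, are assumptions for the sake of the example, not the released training configuration.

```python
import torch
from torch import nn

def configure_stage(model: nn.Module, stage: int) -> torch.optim.Optimizer:
    """Illustrative two-stage schedule: stage 1 freezes the embedding model and
    trains only the ListTransformer; stage 2 trains all parameters."""
    if stage == 1:
        for p in model.embedding_model.parameters():   # hypothetical attribute name
            p.requires_grad = False
        return torch.optim.AdamW(model.list_transformer.parameters(), lr=1e-4)
    # Stage 2: unfreeze everything and continue training end to end.
    for p in model.parameters():
        p.requires_grad = True
    return torch.optim.AdamW(model.parameters(), lr=1e-5)
```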

## Inference
Due to limited GPU memory, we feed about 20 passages per query at a time during training. However, in actual use, far more than 20 passages may be input at once (e.g., MMarcoReranking).

To reduce the discrepancy between training and inference, we propose iterative inference. Iterative inference feeds the passages into the ListConRanker multiple times; each pass fixes only the ranking of the passages at the end of the list. A simplified sketch of the procedure follows.
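
The sketch below uses `reranker.compute_score` (shown in the usage example further down) as the scoring call and fixes one passage per pass; the actual `iterative_inference` method may differ in details such as how many passages are fixed per iteration.

```python
from typing import List

def iterative_rank(reranker, query: str, passages: List[str]) -> List[int]:
    """Repeatedly score the remaining passages and fix only the lowest-scored
    one at the tail of the ranking; returns passage indices, best first."""
    remaining = list(range(len(passages)))
    tail: List[int] = []  # indices fixed from worst upward
    while len(remaining) > 1:
        batch = [[query] + [passages[i] for i in remaining]]
        scores = reranker.compute_score(batch)[0]
        worst = min(range(len(remaining)), key=lambda j: scores[j])
        tail.append(remaining.pop(worst))
    tail.append(remaining[0])
    return tail[::-1]
```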

## Performance
All scores are MAP; "w/o Iterative Inference" is ListConRanker evaluated with conventional single-pass inference.

| Model      | cMedQA1.0 | cMedQA2.0 | MMarcoReranking | T2Reranking | Avg. |
| :--- | :---: | :---: | :---: | :---: | :---: |
| LdIR-Qwen2-reranker-1.5B | 86.50 | 87.11 | 39.35 | 68.84 | 70.45 |
| zpoint-large-embedding-zh | 91.11 | 90.07 | 38.87 | 69.29 | 72.34 |
| xiaobu-embedding-v2 | 90.96 | 90.41 | 39.91 | 69.03 | 72.58 |
| Conan-embedding-v1 | 91.39 | 89.72 | 41.58 | 68.36 | 72.76 |
| ListConRanker | 90.55 | 89.38 | 43.88 | 69.17 | **73.25** |
| - w/o Iterative Inference | 90.20 | 89.98 | 37.52 | 69.17 | 71.72 |

## How to use
```python
from modules.listconranker import ListConRanker

reranker = ListConRanker('./', use_fp16=True, list_transformer_layer=2)

# [query, passage_1, passage_2, ..., passage_n]
batch = [
    [
        '皮蛋是寒性的食物吗', # query
        '营养医师介绍皮蛋是属于凉性的食物,中医认为皮蛋可治眼疼、牙疼、高血压、耳鸣眩晕等疾病。体虚者要少吃。',  # passage_1
        '皮蛋这种食品是在中国地域才常见的传统食品,它的生长汗青也是非常的悠长。',  # passage_2
        '喜欢皮蛋的人会觉得皮蛋是最美味的食物,不喜欢皮蛋的人则觉得皮蛋是黑暗料理,尤其很多外国朋友都不理解我们吃皮蛋的习惯' # passage_3
    ],
    [
        '月有阴晴圆缺的意义', # query
        '形容的是月所有的状态,晴朗明媚,阴沉混沌,有月圆时,但多数时总是有缺陷。',  # passage_1
        '人有悲欢离合,月有阴晴圆缺这句话意思是人有悲欢离合的变迁,月有阴晴圆缺的转换。',  # passage_2
        '既然是诗歌,又哪里会有真正含义呢? 大概可以说:人生有太多坎坷,苦难,从容坦荡面对就好。', # passage_3
        '一零七六年苏轼贬官密州,时年四十一岁的他政治上很不得志,时值中秋佳节,非常想念自己的弟弟子由内心颇感忧郁,情绪低沉,有感而发写了这首词。' # passage_4
    ]
]

# for conventional inference, please manage the batch size yourself
scores = reranker.compute_score(batch)
print(scores)
# [[0.5126953125, 0.331298828125, 0.3642578125], [0.63671875, 0.71630859375, 0.42822265625, 0.35302734375]]

# for iterative inference, only a batch size of 1 is supported
# the scores do not indicate similarity but are intended only for ranking
scores = reranker.iterative_inference(batch[0])
print(scores)
# [0.5126953125, 0.331298828125, 0.3642578125]
```

To reproduce the results with iterative inference, please run:
```bash
python3 eval_listconranker_iterative_inference.py
```

To reproduce the results without iterative inference, please run:
```bash
python3 eval_listconranker.py
```

## Reference
1. https://arxiv.org/abs/2002.10857
2. https://github.com/FlagOpen/FlagEmbedding
3. https://arxiv.org/abs/2408.15710

## License
This work is licensed under an [MIT License](https://opensource.org/license/MIT), and the model weights are licensed under a [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/).