# Enhanced-BGE-M3-with-CLP-and-MoE ([paper](https://arxiv.org/abs/2412.17364), [code](https://github.com/CreaLabs/Enhanced-BGE-M3-with-CLP-and-MoE))
## Contrastive Learning Penalty (CLP)
CLP is a novel loss function designed to address the limitations of existing contrastive learning methods and improve performance on information retrieval tasks. It adds a penalty term that encourages the model to learn more discriminative representations by taking into account the similarity between each negative sample and its own corresponding query.
The CLP loss function is defined as follows:
<img src="https://raw.githubusercontent.com/CreaLabs/Enhanced-BGE-M3-with-CLP-and-MoE/main/imgs/clpl_formula.PNG" width="1000"/>
where:
* h<sub>i</sub>: The embedding of the query for the i-th instance.
* h<sub>i</sub><sup>+</sup>: The embedding of the positive sample for the i-th instance.
* H<sup>'</sup>: The set of negative samples for the i-th instance.
* h<sup>'</sup>: The embedding of a negative sample in H<sup>'</sup>.
* H<sup>*</sup>: The set of positive queries for the documents corresponding to the negative samples.
* sim(a, b): The cosine similarity function between embeddings a and b.
* τ: The temperature parameter.
* λ: The balancing parameter between the contrastive loss and the penalty term.
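
For intuition, here is a minimal PyTorch sketch of a CLP-style objective. The authoritative formula is the image above; the version below is a reconstruction from the symbol definitions, so the function name `clp_loss`, its argument names, and the exact form of the penalty term are assumptions for illustration, not the repository's implementation:

```python
import torch
import torch.nn.functional as F

def clp_loss(q, pos, negs, neg_queries, tau=0.05, lam=0.5):
    """Illustrative CLP-style loss for one training instance.

    q           -- (d,)   query embedding h_i
    pos         -- (d,)   positive document embedding h_i^+
    negs        -- (k, d) negative document embeddings H'
    neg_queries -- (k, d) positive queries of the negative documents H*
    """
    # Cosine similarity via dot products of L2-normalised vectors.
    q = F.normalize(q, dim=-1)
    pos = F.normalize(pos, dim=-1)
    negs = F.normalize(negs, dim=-1)
    neg_queries = F.normalize(neg_queries, dim=-1)

    # Standard InfoNCE term: pull h_i toward h_i^+, push it away from H'.
    pos_sim = (q * pos).sum() / tau
    neg_sims = (negs @ q) / tau
    contrastive = -pos_sim + torch.logsumexp(
        torch.cat([pos_sim.view(1), neg_sims]), dim=0
    )

    # Penalty term (assumed form): keep each negative document close to its
    # *own* positive query, so pushing it away from h_i does not erase its
    # relevance to the query it actually answers.
    penalty = -((negs * neg_queries).sum(dim=-1) / tau).mean()

    # lam balances the contrastive term and the penalty, as in the formula.
    return contrastive + lam * penalty
```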
The figure below illustrates the difference between the standard contrastive learning loss and the contrastive learning penalty loss:
<img src="https://raw.githubusercontent.com/CreaLabs/Enhanced-BGE-M3-with-CLP-and-MoE/main/imgs/figure1.PNG" width="1000"/>
## Specs
### Model

| Model Name | Introduction |
|---|---|
| [bge-m3-ko-CLPL-interMoE](https://huggingface.co/CreaLabs/bge-m3-ko-CLPL-interMoE) | This model applies CLPL and MoE, trained on the MIRACL Korean training dataset. MoE is applied to the intermediate layer, and only the MoE layers were trained during fine-tuning. |
| [bge-m3-fa-CLPL-interMoE](https://huggingface.co/CreaLabs/bge-m3-fa-CLPL-interMoE) | This model applies CLPL and MoE, trained on the MIRACL Persian training dataset. MoE is applied to the intermediate layer, and only the MoE layers were trained during fine-tuning. |
| [bge-m3-hi-CLPL-interMoE](https://huggingface.co/CreaLabs/bge-m3-hi-CLPL-interMoE) | This model applies CLPL and MoE, trained on the MIRACL Hindi training dataset. MoE is applied to the intermediate layer, and only the MoE layers were trained during fine-tuning. |
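
A hypothetical usage sketch, assuming these checkpoints expose the upstream FlagEmbedding `BGEM3FlagModel` interface; in practice the MoE layers may require the loader shipped in this repository instead:

```python
from FlagEmbedding import BGEM3FlagModel

# Assumption: the checkpoint loads like a stock bge-m3 model; the added MoE
# layers may require this repository's own loading code.
model = BGEM3FlagModel("CreaLabs/bge-m3-ko-CLPL-interMoE", use_fp16=True)

queries = ["한국의 수도는 어디인가요?"]
docs = ["서울은 대한민국의 수도이다.", "부산은 대한민국의 항구 도시이다."]

q_emb = model.encode(queries)["dense_vecs"]
d_emb = model.encode(docs)["dense_vecs"]
print(q_emb @ d_emb.T)  # similarity scores (dense vectors are L2-normalised)
```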
### Data

Negative sampling is performed with the ANCE methodology, and the positive queries of the negative samples, which CLPL requires, are generated with the Gemini 1.5 Pro model; a sketch of the resulting record layout follows the table below.

| Dataset | Introduction |
|---|---|
| [ko_CLPL_train_data](https://github.com/Dream-Forge-Studios/Enhanced-BGE-M3-with-CLPL-and-MoE/blob/main/data/ko_CLPL_train_data.jsonl) | MIRACL Korean CLPL training dataset |
| [fa_CLPL_train_data](https://github.com/Dream-Forge-Studios/Enhanced-BGE-M3-with-CLPL-and-MoE/blob/main/data/fa_CLPL_train_data.jsonl) | MIRACL Persian CLPL training dataset |
| [hi_CLPL_train_data](https://github.com/Dream-Forge-Studios/Enhanced-BGE-M3-with-CLPL-and-MoE/blob/main/data/hi_CLPL_train_data.jsonl) | MIRACL Hindi CLPL training dataset |
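
For reference, a minimal sketch of reading one CLPL training record. The field names (`query`, `pos`, `neg`, `neg_queries`) are assumptions about the JSONL schema made for illustration; inspect the files above for the authoritative layout:

```python
import json

# Hypothetical record layout -- field names are assumed, not documented here.
with open("data/ko_CLPL_train_data.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        query = record["query"]        # the training query h_i
        positives = record["pos"]      # positive passages for h_i^+
        negatives = record["neg"]      # ANCE-mined negative passages H'
        # Gemini 1.5 Pro generated queries for the negatives (name assumed):
        neg_queries = record.get("neg_queries", [])
        break
```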
## Evaluation
Evaluation results from the paper (Table 4):
<img src="https://raw.githubusercontent.com/CreaLabs/Enhanced-BGE-M3-with-CLP-and-MoE/main/imgs/table4.PNG" width="1000"/>
## Citation
```bibtex
@misc{yu2024efficientfinetuningmethodologytext,
      title={Efficient fine-tuning methodology of text embedding models for information retrieval: contrastive learning penalty (CLP)},
      author={Jeongsu Yu},
      year={2024},
      eprint={2412.17364},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2412.17364},
}
```