---
license: apache-2.0
datasets:
- TIGER-Lab/MMEB-train
language:
- en
base_model:
- llava-hf/llava-v1.6-mistral-7b-hf
library_name: transformers
---

This is a new checkpoint trained from [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) with an enhanced training setup (LoRA tuning, a batch size of 2048, and a maximum sub-dataset size of 100k). It shows significantly improved performance on MMEB and Flickr30K compared to the previous Phi-3.5-based model.
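
For a concrete picture of what "LoRA tuning" means here, the snippet below is a rough sketch using PEFT. The rank, alpha, and target modules are assumptions made for illustration; only the use of LoRA itself and the batch size are stated above.

```python
# Rough sketch of a LoRA fine-tuning setup with PEFT (not the authors' exact config).
# The rank, alpha, and target modules below are assumptions for illustration only.
from peft import LoraConfig, get_peft_model
from transformers import LlavaNextForConditionalGeneration

base = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf"
)
lora_config = LoraConfig(
    r=8,                                   # assumed rank
    lora_alpha=16,                         # assumed scaling factor
    target_modules=["q_proj", "v_proj"],   # assumed target projections
    task_type="FEATURE_EXTRACTION",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```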

This repo contains the code and data for [VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks](https://arxiv.org/abs/2410.05160). In this paper, we focus on building a unified multimodal embedding model suitable for a wide range of tasks. Our approach is based on transforming an existing, well-trained Vision-Language Model (VLM) into an embedding model.

## GitHub
 - [GitHub](https://github.com/TIGER-AI-Lab/VLM2Vec)


## Data

Our model is trained on MMEB-train with contrastive learning and evaluated on MMEB-eval. We use only in-batch negatives during training; a minimal sketch of this objective appears after the dataset links below.

 - Train data: https://huggingface.co/datasets/TIGER-Lab/MMEB-train
 - Eval data: https://huggingface.co/datasets/TIGER-Lab/MMEB-eval
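
The snippet below is a minimal sketch of a contrastive objective with in-batch negatives, as described above. The function name and the temperature value are illustrative, not taken from the paper or the training code.

```python
# Minimal sketch of contrastive training with in-batch negatives (InfoNCE-style).
# The temperature value and function name are illustrative assumptions.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, target_emb, temperature=0.02):
    """query_emb, target_emb: (batch, dim) embeddings of matched query/target pairs."""
    query_emb = F.normalize(query_emb, dim=-1)
    target_emb = F.normalize(target_emb, dim=-1)
    # Similarity of every query against every target in the batch:
    # the diagonal holds the positive pairs, off-diagonal entries are in-batch negatives.
    logits = query_emb @ target_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```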


## Experimental Results
VLM2Vec-LlaVa-Next outperforms the baselines and other versions of VLM2Vec by a large margin.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64778fb8168cb428e00f69b0/IaKuKe5ps_bvDTf98C0rt.png)


## How to use VLM2Vec-LlaVa-Next
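
The official inference code lives in the GitHub repository linked above. The snippet below is only a minimal sketch using plain `transformers` and PEFT: it loads the base LLaVA-Next model, attaches this repository's LoRA weights, and pools the last token of the final hidden layer into an embedding. The repository id, prompt template, and pooling strategy are assumptions; follow the VLM2Vec repo for the exact procedure.

```python
# Minimal sketch, not the official inference code (see the VLM2Vec GitHub repo).
# The adapter repo id, prompt template, and last-token pooling are assumptions.
import torch
from PIL import Image
from peft import PeftModel
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

base_id = "llava-hf/llava-v1.6-mistral-7b-hf"
processor = LlavaNextProcessor.from_pretrained(base_id)
base = LlavaNextForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter from this repository (repo id assumed).
model = PeftModel.from_pretrained(base, "TIGER-Lab/VLM2Vec-LlaVa-Next")
model.eval()

image = Image.open("example.jpg")
prompt = "[INST] <image>\nRepresent the given image for retrieval. [/INST]"  # illustrative instruction
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True, return_dict=True)
    # Last-token pooling over the final hidden layer (assumed pooling strategy).
    emb = out.hidden_states[-1][:, -1, :]
    emb = torch.nn.functional.normalize(emb, dim=-1)

print(emb.shape)  # (1, hidden_size)
```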



## Citation
```
@article{jiang2024vlm2vec,
  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
  journal={arXiv preprint arXiv:2410.05160},
  year={2024}
}
```