---
license: apache-2.0
datasets:
  - TIGER-Lab/MMEB-train
language:
  - en
base_model:
  - llava-hf/llava-v1.6-mistral-7b-hf
library_name: transformers
---

# VLM2Vec-LLaVa-Next

This is a new checkpoint trained from llava-v1.6-mistral-7b-hf with an enhanced training setup (LoRA tuning, a batch size of 2048, and a maximum sub-dataset size of 100K). It shows significantly improved performance on MMEB and Flickr30K compared to the previous Phi-3.5-based model.
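As a rough illustration of what a LoRA setup for a LLaVA-Next backbone can look like with the `peft` library, see the sketch below. The rank, alpha, dropout, and target modules are illustrative assumptions, not the exact configuration used to train this checkpoint.

```python
# Hypothetical sketch of a LoRA setup for the LLaVA-Next backbone using `peft`.
# Rank, alpha, dropout, and target modules are assumptions for illustration.
import torch
from peft import LoraConfig, get_peft_model
from transformers import LlavaNextForConditionalGeneration

backbone = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    torch_dtype=torch.float16,
)

lora_config = LoraConfig(
    r=16,                # assumed LoRA rank
    lora_alpha=32,       # assumed scaling factor
    lora_dropout=0.05,   # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
)

model = get_peft_model(backbone, lora_config)
model.print_trainable_parameters()
```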

This repo contains the code and data for VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks. In this paper, we focus on building a unified multimodal embedding model suitable for a wide range of tasks. Our approach is based on transforming an existing, well-trained Vision-Language Model (VLM) into an embedding model.

## GitHub

## Data

Our model is trained on MMEB-train with contrastive learning and evaluated on MMEB-eval. We use only in-batch negatives for training.
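For intuition, the following is a minimal sketch of a contrastive (InfoNCE-style) loss with in-batch negatives. It is not the project's actual training code (which is in the GitHub repository); the temperature value and the normalization are assumptions.

```python
# Minimal sketch of a contrastive loss with in-batch negatives.
# Illustrative only; temperature and normalization are assumptions.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              target_emb: torch.Tensor,
                              temperature: float = 0.02) -> torch.Tensor:
    """query_emb, target_emb: (batch, dim) embeddings of paired queries and targets.
    Every non-matching target in the batch serves as a negative for a query."""
    query_emb = F.normalize(query_emb, dim=-1)
    target_emb = F.normalize(target_emb, dim=-1)
    logits = query_emb @ target_emb.T / temperature          # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)  # diagonal = positive pairs
    return F.cross_entropy(logits, labels)
```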

## Experimental Results

VLM2Vec-LLaVa-Next outperforms the baselines and the other VLM2Vec variants by a large margin.


## How to use VLM2Vec-LLaVa-Next
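The officially supported inference path is the code in the VLM2Vec GitHub repository. As a rough sketch of the general idea only, the snippet below loads the LLaVA-Next backbone with Hugging Face transformers and pools the last hidden state of the final token into an embedding. The model id (it loads the base backbone, not this repo's fine-tuned weights), the prompt template, and the last-token pooling are assumptions, not the repository's documented usage.

```python
# Hedged sketch: extract an embedding from the LLaVA-Next backbone via transformers.
# The model id, prompt template, and last-token pooling are assumptions;
# the officially supported path is the VLM2Vec GitHub code.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # base backbone (assumption)
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")
prompt = "[INST] <image>\nRepresent the given image for classification. [/INST]"  # assumed instruction format

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, return_dict=True)

# Last-token pooling over the final hidden layer (an assumption here).
embedding = outputs.hidden_states[-1][:, -1, :]
embedding = torch.nn.functional.normalize(embedding, dim=-1)
print(embedding.shape)
```

Text-only queries could be embedded the same way by passing only `text` to the processor; again, see the GitHub repository for the supported usage.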

## Citation

@article{jiang2024vlm2vec,
  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
  journal={arXiv preprint arXiv:2410.05160},
  year={2024}
}