---
license: apache-2.0
datasets:
- TIGER-Lab/MMEB-train
language:
- en
base_model:
- llava-hf/llava-v1.6-mistral-7b-hf
library_name: transformers
---

A new checkpoint trained using [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) with an enhanced training setup (LoRA tuning, batch size of 2048, maximum sub-dataset size of 100k). This model has shown significantly improved performance on MMEB & Flickr30K compared to the previous Phi-3.5-based model.

This repo contains the code and data for [VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks](https://arxiv.org/abs/2410.05160). In this paper, we focus on building a unified multimodal embedding model suitable for a wide range of tasks. Our approach transforms an existing, well-trained Vision-Language Model (VLM) into an embedding model.

## Github
- [Github](https://github.com/TIGER-AI-Lab/VLM2Vec)

## Data
Our model is trained on MMEB-train with contrastive learning and evaluated on MMEB-eval. We use only in-batch negatives for training.
- Train data: https://huggingface.co/datasets/TIGER-Lab/MMEB-train
- Eval data: https://huggingface.co/datasets/TIGER-Lab/MMEB-eval

## Experimental Results
VLM2Vec-LlaVa-Next outperforms the baselines and other versions of VLM2Vec by a large margin.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64778fb8168cb428e00f69b0/IaKuKe5ps_bvDTf98C0rt.png)

## How to use VLM2Vec-LlaVa-Next
See the usage sketch after the citation below; the officially supported inference code is available in the GitHub repo linked above.

## Citation
```
@article{jiang2024vlm2vec,
  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
  journal={arXiv preprint arXiv:2410.05160},
  year={2024}
}
```
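Below is a minimal, illustrative sketch of the usage referenced in the "How to use VLM2Vec-LlaVa-Next" section above. It is not the official VLM2Vec inference code (that lives in the [GitHub repo](https://github.com/TIGER-AI-Lab/VLM2Vec)): it assumes this checkpoint can be loaded directly with `LlavaNextForConditionalGeneration` and takes the last token's final-layer hidden state as the embedding; the model id, prompt template, and instruction strings are placeholder assumptions.

```python
# Illustrative sketch only -- not the official VLM2Vec inference code.
# Assumptions: the checkpoint loads directly with LlavaNextForConditionalGeneration,
# and embeddings are obtained by last-token pooling of the final hidden layer.
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "TIGER-Lab/VLM2Vec-LLaVa-Next"  # assumption: replace with this repo's actual id
processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

def embed(text, image=None):
    """Encode a text (optionally paired with an image) into one normalized vector."""
    if image is not None:
        prompt = f"[INST] <image>\n{text} [/INST]"  # Mistral-style LLaVA-Next chat template
        inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
    else:
        prompt = f"[INST] {text} [/INST]"
        inputs = processor(text=prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True, return_dict=True)
    emb = out.hidden_states[-1][:, -1, :]  # last-token pooling
    return torch.nn.functional.normalize(emb, dim=-1)

# Example: rank two candidate captions against an image + instruction query.
image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
query = embed("Represent the given image for image-to-text retrieval.", image)  # illustrative instruction
cand1 = embed("Two cats lying on a couch.")
cand2 = embed("A snowy mountain at sunrise.")
print((query @ cand1.T).item(), (query @ cand2.T).item())  # higher score = better match
```

Because the embeddings are L2-normalized, the dot products above are cosine similarities, matching the contrastive training objective described in the paper.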