ziyjiang committed
Commit
71df8f9
1 Parent(s): da24e90

Update README.md

Files changed (1)
  1. README.md +6 -1
README.md CHANGED
@@ -7,4 +7,9 @@ language:
 base_model:
 - llava-hf/llava-v1.6-mistral-7b-hf
 library_name: transformers
----
+---
+
+A new checkpoint trained using [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) with an enhanced training setup (LoRA tuning, batch size of 2048, maximum sub-dataset size of 100k). This model has shown significantly improved performance on MMEB & Flickr30K compared to the previous Phi-3.5-based model.
+
+This repo contains the code and data for [VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks](https://arxiv.org/abs/2410.05160). In this paper, we focus on building a unified multimodal embedding model suitable for a wide range of tasks. Our approach is based on transforming an existing, well-trained Vision-Language Model (VLM) into an embedding model. The core idea is to append an [EOS] token at the end of the input sequence, which serves as the representation for the combined multimodal inputs.
+
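To make the [EOS]-style pooling described in the README text above concrete, here is a minimal sketch of extracting a joint image+text embedding from the llava-v1.6-mistral-7b-hf base model with Hugging Face transformers. It uses last-token pooling of the final hidden state as a stand-in for the paper's appended [EOS] representation; the prompt wording, the image path, and the pooling/normalization details are illustrative assumptions, not the repo's actual inference code.

```python
# Minimal sketch (assumption, not the repo's released code): obtain a joint
# image+text embedding from llava-v1.6-mistral-7b-hf by last-token pooling,
# approximating the [EOS]-token representation described in the VLM2Vec paper.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# "example.jpg" and the instruction text are placeholders.
image = Image.open("example.jpg")
prompt = "[INST] <image>\nRepresent the given image for retrieval. [/INST]"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, return_dict=True)

# Pool the last position of the final hidden layer and L2-normalize it,
# treating that vector as the multimodal embedding.
last_hidden = outputs.hidden_states[-1]               # (batch, seq_len, hidden)
embedding = torch.nn.functional.normalize(last_hidden[:, -1, :], dim=-1)
print(embedding.shape)
```

Embeddings produced this way for queries and candidates can be compared with cosine similarity, which is what the L2 normalization above sets up.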