---
license: llama2
---

# v-MLLM Model Card

## Model details

**Model type:**
v-MLLM is an open-source MLLM trained on the Visual-Modality Instruction (VIM) corpus. It can robustly follow both text-modality and visual-modality instructions.

**Model date:**
v-MLLM-13B was trained in January 2024.

**Github for more information:**
https://github.com/VIM-Bench/VIM_TOOL
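
**Example usage (sketch):**
The snippet below is a minimal inference sketch, assuming the released checkpoint is LLaVA-compatible and loadable with Hugging Face `transformers`. The repository id and prompt template are illustrative placeholders, not confirmed by this card; the GitHub repository above is the authoritative source for supported inference code.

```python
# Hypothetical usage sketch: assumes a LLaVA-compatible checkpoint loadable with
# Hugging Face transformers. The repo id below is a placeholder, not confirmed.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "VIM-Bench/v-MLLM-13B"  # placeholder repo id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Text-modality instruction: the request is given in the prompt text.
image = Image.open("example.jpg")
prompt = "USER: <image>\nDescribe this image.\nASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```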

## License
v-MLLM is licensed under the LLAMA 2 Community License, 
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Intended use
**Primary intended uses:**
The primary use of v-MLLM is for research on multimodal large language models.

**Primary intended users:**
The primary intended users of the model are researchers in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 846K-sample VIM corpus built from the LVIS-Instruct4V corpus.
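
In the VIM setting, the instruction is embedded into the image pixels rather than passed as prompt text, so the model must read the instruction from the image and follow it. The sketch below illustrates one way such a visual-modality input could be constructed with PIL; it is an illustration only, not the authors' rendering pipeline (see the GitHub repository for the official VIM corpus tools).

```python
# Illustrative sketch of a VIM-style "visual-modality" input: the instruction is
# rendered into the image itself. Not the authors' exact rendering pipeline.
from PIL import Image, ImageDraw, ImageFont

def embed_instruction(image_path: str, instruction: str) -> Image.Image:
    image = Image.open(image_path).convert("RGB")
    # Add a white band below the image and draw the instruction text into it.
    band_height = 60
    canvas = Image.new("RGB", (image.width, image.height + band_height), "white")
    canvas.paste(image, (0, 0))
    draw = ImageDraw.Draw(canvas)
    font = ImageFont.load_default()
    draw.text((10, image.height + 10), instruction, fill="black", font=font)
    return canvas

# The model then receives only this image (plus a generic prompt) and must
# follow the instruction printed in the pixels.
vim_image = embed_instruction("example.jpg", "Describe this image.")
```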

## Citation

Please cite our papers if you find our resources useful:

```
@misc{li2024text,
      title={Text as Images: Can Multimodal Large Language Models Follow Printed Instructions in Pixels?}, 
      author={Xiujun Li and Yujie Lu and Zhe Gan and Jianfeng Gao and William Yang Wang and Yejin Choi},
      year={2024},
      eprint={2311.17647},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{lu2023vim,
      title={VIM: Probing Multimodal Large Language Models for Visual Embedded Instruction Following}, 
      author={Yujie Lu and Xiujun Li and William Yang Wang and Yejin Choi},
      year={2023},
      eprint={2311.17647},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```