---
license: llama2
---

# v-MLLM Model Card

## Model details

**Model type:** v-MLLM is an open-source MLLM trained on the Visual-Modality Instruction (VIM) corpus; it robustly follows both text-modality instructions and visual-modality instructions, i.e., instructions presented inside the image itself.
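To make the input format concrete: a visual-modality instruction carries the instruction text in the image rather than in a separate text prompt. Below is a minimal sketch of one plausible way to build such an input with Pillow; the helper name `embed_instruction`, the band placement, and the margins are illustrative assumptions, not the exact preprocessing used to construct the VIM corpus (see the GitHub repository linked below for the actual tooling).

```python
from PIL import Image, ImageDraw


def embed_instruction(image_path: str, instruction: str, band_height: int = 60) -> Image.Image:
    """Hypothetical helper: render the instruction into a white band above the
    image, so the model receives it through the visual modality only."""
    image = Image.open(image_path).convert("RGB")
    canvas = Image.new("RGB", (image.width, image.height + band_height), "white")
    canvas.paste(image, (0, band_height))  # original pixels sit below the text band
    draw = ImageDraw.Draw(canvas)
    draw.text((10, band_height // 4), instruction, fill="black")  # default PIL bitmap font
    return canvas


# Example: the image now carries the instruction; the text prompt can stay empty.
# visual_input = embed_instruction("example.jpg", "Describe the objects in this image.")
```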

**Model date:** v-MLLM-13B was trained in January 2024.

**GitHub for more information:** https://github.com/VIM-Bench/VIM_TOOL
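For experimentation, the weights can be fetched from the Hugging Face Hub before wiring them into the inference code from the repository above. A minimal sketch, assuming the repo id `xjli/v-mllm-13b` (inferred from this card's location; verify before use):

```python
from huggingface_hub import snapshot_download

# Assumed repo id; adjust if the model is hosted under a different name.
local_dir = snapshot_download(repo_id="xjli/v-mllm-13b")
print(f"Weights downloaded to {local_dir}")
```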

## License

v-MLLM is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Intended use

**Primary intended uses:** The primary use of v-MLLM is for research on multimodal large language models.

**Primary intended users:** The primary intended users of the model are researchers in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 846K-sample VIM corpus built from the LVIS-Instruct4V corpus.