---
inference: false
pipeline_tag: image-text-to-text
---
<br>
<br>

# AVG-LLaVA Model Card

## Model details

**Model type:**
AVG-LLaVA is an open-source large multimodal model (LMM) that adaptively selects the appropriate visual granularity based on the input image and instruction.
It is an auto-regressive language model based on the transformer architecture.
Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
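
A minimal inference sketch is shown below. It assumes the AVG-LLaVA codebase (https://github.com/DeepLearnXMU/AVG-LLaVA) retains the upstream LLaVA quick-start utilities (`get_model_name_from_path`, `eval_model`); the checkpoint id `DeepLearnXMU/AVG-LLaVA` is a placeholder, so replace it with the actual Hugging Face repository for this model.

```python
# Minimal inference sketch; assumes the AVG-LLaVA repo keeps the
# upstream LLaVA quick-start API (install the repo with pip first).
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "DeepLearnXMU/AVG-LLaVA"  # placeholder checkpoint id

args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "Describe the image.",
    "conv_mode": None,
    "image_file": "https://llava-vl.github.io/static/images/view.jpg",
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

# Loads the model, runs the multimodal pipeline (including the model's
# adaptive visual-granularity selection), and prints the generated answer.
eval_model(args)
```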

**Paper or resources for more information:**
https://arxiv.org/abs/2410.02745

## License
Llama 2 is licensed under the LLAMA 2 Community License, 
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/DeepLearnXMU/AVG-LLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of AVG-LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- ShareGPT4V Mix665K
- 200K GPT4V-generated instruction data (ALLaVA)
- 200K VQA samples from various tasks

## Evaluation dataset
A collection of 11 benchmarks spanning general VQA, text-oriented VQA, and general multimodal evaluation.