Update README.md
README.md

---
inference: false
pipeline_tag: image-text-to-text
---
<br>
<br>

# AVG-LLaVA Model Card

## Model details

**Model type:**
AVG-LLaVA is an open-source large multimodal model (LMM) that can adaptively select the appropriate visual granularity based on the input image and instruction.
It is an auto-regressive language model based on the transformer architecture.
Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)

**Paper or resources for more information:**
https://arxiv.org/abs/2410.02745
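
**Usage note:**
This card sets `inference: false`, so the checkpoint is not intended for the hosted Inference API or a plain `transformers` pipeline; it is meant to be used with the AVG-LLaVA codebase (https://github.com/DeepLearnXMU/AVG-LLaVA). The snippet below is only a minimal sketch that assumes the repository keeps the original LLaVA-style loader interface (`load_pretrained_model`, `get_model_name_from_path`); the model path is a placeholder, and the exact entry points should be checked against the GitHub README.

```python
# Sketch only: assumes AVG-LLaVA exposes the same loader API as the original LLaVA codebase.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "path/to/AVG-LLaVA-checkpoint"  # placeholder: local path or Hugging Face repo id

# Returns the tokenizer, the multimodal model, the image processor, and the context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```

For end-to-end image-plus-instruction inference, the original LLaVA codebase provides `llava.eval.run_llava.eval_model`; whether AVG-LLaVA keeps the same entry point should be confirmed in its repository.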

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/DeepLearnXMU/AVG-LLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of AVG-LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- ShareGPT4V Mix665K
- 200K GPT4V-generated instruction data (ALLaVA)
- 200K various VQA data

## Evaluation dataset
A collection of 11 benchmarks, including general VQA benchmarks, text-oriented VQA benchmarks, and general multimodal benchmarks.