liuhaotian committed
Commit: cf9e42a
1 Parent(s): 035fefc

Create README.md

Files changed (1)
  1. README.md +42 -0
README.md ADDED
---
inference: false
---

<br>
<br>

# LLaVA Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.

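For concreteness, here is a minimal loading sketch. It assumes the `llava` package from https://github.com/haotian-liu/LLaVA is installed; exact entry points can differ across repo versions, and the repository and base-model IDs below are assumptions. Since this checkpoint stores LoRA weights, the Vicuna base model must be supplied separately.

```python
# Minimal loading sketch -- assumes the `llava` package from
# https://github.com/haotian-liu/LLaVA; entry points may differ
# across repo versions.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# LoRA checkpoint plus its Vicuna base (both IDs are assumptions).
model_path = "liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3"
model_base = "lmsys/vicuna-13b-v1.3"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=model_base,
    model_name=get_model_name_from_path(model_path),
)
```
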
**Model date:**
LLaVA-v1-0719-336px-LoRA-Vicuna-13B-v1.3 was trained in July 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

## License
Non-commercial use only.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 80K GPT-generated multimodal instruction-following examples.

## Evaluation dataset
For a preliminary evaluation of model quality, we created a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014; each image is associated with three question types: conversational, detailed description, and complex reasoning. We use GPT-4 to judge the model outputs.
We also evaluate the model on the ScienceQA dataset; combining LLaVA with GPT-4 sets a new state of the art on this benchmark.
See https://llava-vl.github.io/ for more details.
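
As an illustration of the GPT-4-as-judge setup described above, the sketch below scores a candidate answer against a reference answer. This is a hedged approximation, not the authors' exact protocol: the prompt wording, scoring scale, and model name are assumptions, and it requires the `openai` Python package (v1+) with an API key configured.

```python
# GPT-4-as-judge sketch -- illustrative only; the prompt, scale, and
# model name are assumptions, not the authors' exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(question: str, reference: str, candidate: str) -> str:
    """Ask GPT-4 to rate a candidate answer against a reference."""
    prompt = (
        "Rate the candidate answer against the reference on a 1-10 "
        "scale, then briefly justify the score.\n\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content
```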