ZhangYuanhan committed on
Commit f2a5f8b
1 Parent(s): 760dce5

Update README.md

Files changed (1): README.md +52 -3

README.md CHANGED
---
inference: false
license: apache-2.0
---

<br>

# LLaVA-Next-Video Model Card

## Model details

**Model type:**
<br>
LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
<br>
Base LLM: [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)

**Model date:**
<br>
LLaVA-NeXT-Video-32B-Qwen was trained in June 2024.

**Paper or resources for more information:**
<br>
https://github.com/LLaVA-VL/LLaVA-NeXT
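
For orientation, a minimal loading sketch. It assumes the `llava` package from the repository above is installed and exposes the standard LLaVA builder API, and that this checkpoint is published on the Hub as `lmms-lab/LLaVA-NeXT-Video-32B-Qwen`; both the API and the Hub path are assumptions rather than guarantees from this card.

```python
# Hypothetical usage sketch: assumes the `llava` package from
# https://github.com/LLaVA-VL/LLaVA-NeXT is installed and that this
# checkpoint loads through its standard builder; the Hub path is an assumption.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "lmms-lab/LLaVA-NeXT-Video-32B-Qwen"  # assumed Hub repo id

# The builder returns the tokenizer, the multimodal model, the image
# processor used to encode sampled video frames, and the context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, None, get_model_name_from_path(model_path)
)
```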

## License
This model follows the license of the base LLM, [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B).

## Where to send questions or comments about the model
https://github.com/LLaVA-VL/LLaVA-NeXT/issues

## Intended use
**Primary intended uses:**
<br>
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
<br>
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

### Image
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.

### Video
- In-house data.