---
inference: false
license: llama2
---

<br>

# LLaVA-Next-Video Model Card

## Model details

**Model type:**
<br>
LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
<br>
Base LLM: lmsys/vicuna-7b-v1.5

**Model date:**
<br>
LLaVA-Next-Video-7B-DPO was trained in April 2024.

**Paper or resources for more information:**
<br>
https://github.com/LLaVA-VL/LLaVA-NeXT

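**Example inference:**
<br>
The sketch below shows one way to run video question answering with this model family. It is a minimal sketch, not this repository's documented usage: it assumes the Hugging Face-converted checkpoint `llava-hf/LLaVA-NeXT-Video-7B-DPO-hf` together with `transformers` >= 4.42, `av`, and `accelerate`; the original-format weights here may instead require the LLaVA-NeXT codebase linked above.

```python
# Minimal video QA sketch. Assumes the HF-converted checkpoint
# llava-hf/LLaVA-NeXT-Video-7B-DPO-hf, transformers >= 4.42, PyAV, accelerate.
import av
import numpy as np
import torch
from transformers import LlavaNextVideoForConditionalGeneration, LlavaNextVideoProcessor

model_id = "llava-hf/LLaVA-NeXT-Video-7B-DPO-hf"
processor = LlavaNextVideoProcessor.from_pretrained(model_id)
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def sample_frames(path: str, num_frames: int = 8) -> np.ndarray:
    """Uniformly sample `num_frames` RGB frames from a video file.

    Assumes the container reports a frame count (true for common MP4 files).
    """
    container = av.open(path)
    stream = container.streams.video[0]
    indices = set(np.linspace(0, stream.frames - 1, num_frames, dtype=int).tolist())
    frames = [frame.to_ndarray(format="rgb24")
              for i, frame in enumerate(container.decode(stream)) if i in indices]
    return np.stack(frames)

# Build a chat-style prompt containing a single video placeholder.
conversation = [{
    "role": "user",
    "content": [{"type": "text", "text": "What is happening in this video?"},
                {"type": "video"}],
}]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

clip = sample_frames("sample.mp4")  # hypothetical local video file
inputs = processor(text=prompt, videos=clip, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
```
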
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Where to send questions or comments about the model
https://github.com/LLaVA-VL/LLaVA-NeXT/issues

## Intended use
**Primary intended uses:**
<br>
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
<br>
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

### Image
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.

### Video
- 100K VideoChatGPT-Instruct.
- 17K video preference data (see the download sketch below): https://huggingface.co/datasets/ShareGPTVideo/train_video_and_instruction

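A minimal sketch for fetching the video preference data above from the Hub, assuming the `huggingface_hub` package is installed; the repository's internal file layout is not described in this card, so inspect the downloaded files before use.

```python
# Download the video preference dataset repo (assumes huggingface_hub is
# installed; the repo's file layout is not documented in this card).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ShareGPTVideo/train_video_and_instruction",
    repo_type="dataset",  # this is a dataset repo, not a model repo
    # allow_patterns=["*.json"],  # optionally restrict to annotation files only
)
print("Preference data downloaded to:", local_dir)
```
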
## Evaluation dataset
A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark.