LLaVA-Video (a collection by lmms-lab, updated Oct 5)
Models focused on video understanding (previously known as LLaVA-NeXT-Video).
Video Instruction Tuning With Synthetic Data
Paper • arXiv:2410.02713 • Published Oct 3
lmms-lab/LLaVA-Video-178K
Dataset (Viewer) • Updated Oct 11 • 1.63M • 30.5k • 96
lmms-lab/LLaVA-Video-7B-Qwen2
Video-Text-to-Text • Updated Oct 25 • 77.3k • 48
lmms-lab/LLaVA-Video-72B-Qwen2
Text Generation • Updated Oct 25 • 1.37k • 16
lmms-lab/LLaVA-Video-7B-Qwen2-Video-Only
Text Generation • Updated Oct 4 • 6.42k • 3
lmms-lab/LLaVA-NeXT-Video-32B-Qwen
Video-Text-to-Text • Updated Oct 4 • 802 • 14
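A minimal sketch of how one might fetch the repositories listed above, assuming the `huggingface_hub` Python package is installed. `snapshot_download` is the hub's standard API for pulling a full repository into the local cache; the helper names here are illustrative, and only the repo IDs shown in this collection are assumed.

```python
from huggingface_hub import snapshot_download

# Repository IDs taken from the collection above.
MODEL_ID = "lmms-lab/LLaVA-Video-7B-Qwen2"
DATASET_ID = "lmms-lab/LLaVA-Video-178K"


def fetch_model(repo_id: str = MODEL_ID) -> str:
    # Downloads (or reuses a cached copy of) the model repo
    # and returns the local directory path.
    return snapshot_download(repo_id)


def fetch_dataset(repo_id: str = DATASET_ID) -> str:
    # Dataset repos live in a separate namespace on the hub,
    # so repo_type="dataset" is required.
    return snapshot_download(repo_id, repo_type="dataset")
```

Note that the 7B and 72B model weights are large; for inference the model card's own instructions (the LLaVA-NeXT codebase) should be followed rather than this download sketch.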