T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs

💻 GitHub   |    📑 Paper   

Model Summary

License

  • The model is built on top of the pre-trained model HuggingFaceM4/Idefics3-8B-Llama3. We release the fine-tuned Idefics3 checkpoints under the Apache 2.0 license.
  • The code in this repo is released under the Apache 2.0 license.

Statement

  • As an LLM, Idefics3-8B-Llama3 generates content by learning from a large amount of text; it cannot comprehend, express personal opinions, or make value judgments. Anything generated by Idefics3-8B-Llama3 does not represent the views or positions of the model developers.
  • We will not be liable for any problems arising from the use of the open-source Idefics3-8B-Llama3 model, including but not limited to data security issues, risks to public opinion, or any risks and problems arising from the model being misled, misused, or improperly disseminated.

Training dataset

  • 100K video instruction data from Video-ChatGPT
  • 100K video caption data from ShareGemini
Model details

  • Model size: 8.46B params
  • Tensor type: BF16 (Safetensors)
