---
license: other
license_link: https://huggingface.co/THUDM/CogVideoX-5b/blob/main/LICENSE
language:
- en
tags:
- cogvideox
- video-generation
- thudm
- text-to-video
inference: false
---
# CogVideoX-5B

📄 Read in Chinese | 🤗 Huggingface Space | 🌐 Github | 📜 arxiv
## Demo Show
## Model Introduction

CogVideoX is an open-source version of the video generation model originating from QingYing. The table below lists the video generation models we currently offer, along with their foundational information.
| Model Name | CogVideoX-2B | CogVideoX-5B (This Repository) |
|---|---|---|
| Model Description | Entry-level model, balancing compatibility. Low cost for running and secondary development. | Larger model with higher video generation quality and better visual effects. |
| Inference Precision | FP16\* (recommended), BF16, FP32, FP8\*, INT8; INT4 not supported | BF16 (recommended), FP16, FP32, FP8\*, INT8; INT4 not supported |
| Single-GPU VRAM Consumption | FP16: 18 GB using SAT / 12.5 GB\* using diffusers<br>INT8: 7.8 GB\* using diffusers | BF16: 26 GB using SAT / 20.7 GB\* using diffusers<br>INT8: 11.4 GB\* using diffusers |
| Multi-GPU Inference VRAM Consumption | FP16: 10 GB\* using diffusers | BF16: 15 GB\* using diffusers |
| Inference Speed (Step = 50, FP/BF16) | Single A100: ~90 seconds<br>Single H100: ~45 seconds | Single A100: ~180 seconds<br>Single H100: ~90 seconds |
| Fine-tuning Precision | FP16 | BF16 |
| Fine-tuning VRAM Consumption (per GPU) | 47 GB (bs=1, LORA)<br>61 GB (bs=2, LORA)<br>62 GB (bs=1, SFT) | 63 GB (bs=1, LORA)<br>80 GB (bs=2, LORA)<br>75 GB (bs=1, SFT) |
| Prompt Language | English\* | English\* |
| Prompt Length Limit | 226 tokens | 226 tokens |
| Video Length | 6 seconds | 6 seconds |
| Frame Rate | 8 frames per second | 8 frames per second |
| Video Resolution | 720 x 480; other resolutions not supported (including fine-tuning) | 720 x 480; other resolutions not supported (including fine-tuning) |
| Positional Encoding | 3d_sincos_pos_embed | 3d_rope_pos_embed |
**Data Explanation**

- When testing with the `diffusers` library, the `enable_model_cpu_offload()` option and the `pipe.vae.enable_tiling()` optimization were enabled. This solution has not been tested for actual VRAM/memory usage on devices other than NVIDIA A100/H100; generally, it can be adapted to all devices with the NVIDIA Ampere architecture and above. If the optimizations are disabled, VRAM usage increases significantly, with peak VRAM approximately 3 times the value in the table.
- When performing multi-GPU inference, the `enable_model_cpu_offload()` optimization needs to be disabled.
- Using an INT8 model reduces inference speed. This accommodates GPUs with lower VRAM, allowing inference to run with minimal loss in video quality, at the cost of significantly slower inference.
- The 2B model is trained in `FP16` precision, and the 5B model in `BF16` precision. We recommend running inference in the precision the model was trained in. `FP8` precision requires an `NVIDIA H100` or newer device, along with source installations of the `torch`, `torchao`, `diffusers`, and `accelerate` Python packages. `CUDA 12.4` is recommended.
- Inference speed testing also used the VRAM optimization scheme above; without VRAM optimization, inference is about 10% faster. Only the `diffusers` versions of the models support quantization.
- The model only supports English input; prompts in other languages can be translated into English during refinement by a large language model.
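As a rough illustration of the peak-VRAM note above, the following sketch applies the quoted ~3x multiplier to a table value. The helper function and multiplier handling are illustrative assumptions, not part of the official tooling; treat the result as an order-of-magnitude guide, not a measurement.

```python
def estimate_peak_vram_gb(optimized_gb: float, multiplier: float = 3.0) -> float:
    """Rough peak-VRAM estimate when the cpu-offload/tiling optimizations
    are disabled, using the ~3x factor quoted in the notes above."""
    return optimized_gb * multiplier

# The 5B model in BF16 with diffusers uses ~20.7 GB with optimizations enabled,
# so without them peak usage would land well beyond a 24 GB consumer GPU:
print(round(estimate_peak_vram_gb(20.7), 1))  # 62.1
```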
## Note

- Use SAT for inference and fine-tuning of SAT-version models. Feel free to visit our GitHub for more information.
## Quick Start 🤗

This model supports deployment using the Hugging Face diffusers library. You can deploy it by following the steps below.

We recommend that you visit our GitHub and check out the relevant prompt optimizations and conversions to get a better experience.
- Install the required dependencies

```shell
# diffusers>=0.30.1
# transformers>=4.44.2
# accelerate>=0.33.0 (suggest installing from source)
# imageio-ffmpeg>=0.5.1
pip install --upgrade transformers accelerate diffusers imageio-ffmpeg
```
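Before running the example, you can sanity-check that your installed packages meet the pinned minimums above. A minimal sketch, using a naive numeric comparison (the `meets_minimum` helper is illustrative and ignores pre-release suffixes; a real setup could use `packaging.version` instead):

```python
from importlib.metadata import version  # stdlib: query installed package versions

def meets_minimum(installed: str, required: str) -> bool:
    """Naive version check: compares dotted numeric components left to right."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(required)

# e.g. check the diffusers minimum pinned above:
# meets_minimum(version("diffusers"), "0.30.1")
print(meets_minimum("0.30.1", "0.30.1"))  # True
print(meets_minimum("0.29.2", "0.30.1"))  # False
```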
- Run the code

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",
    torch_dtype=torch.bfloat16
)

pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```
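As a quick sanity check on the parameters above: `num_frames=49` exported at `fps=8` matches the 6-second video length listed in the table, assuming duration is counted as inter-frame intervals (49 frames span 48 intervals; this counting convention is our assumption, not stated by the model card):

```python
num_frames = 49
fps = 8

# First frame at t=0, so the clip spans (num_frames - 1) inter-frame intervals.
duration_s = (num_frames - 1) / fps
print(duration_s)  # 6.0
```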
## Explore the Model

Welcome to our GitHub, where you will find:

- More detailed technical explanations and code.
- Optimization and conversion of prompts.
- Inference and fine-tuning of SAT-version models, including pre-release versions.
- Project update logs and more opportunities for interaction.
- The CogVideoX toolchain, to help you make better use of the model.
- INT8 model inference code.
## Model License

This model is released under the CogVideoX LICENSE.
## Citation

```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```