Metadata

license: apache-2.0
pipeline_tag: text-to-video

Latte: Latent Diffusion Transformer for Video Generation

This repo contains pre-trained text-to-video generation weights for our paper exploring latent diffusion models with Transformers (Latte). You can find more visualizations on our project page. If you want the pre-trained weights for FaceForensics, SkyTimelapse, UCF101, and Taichi-HD, please see here.
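For quick experimentation, below is a minimal text-to-video sketch. It assumes the weights in this repo can be loaded through the diffusers LattePipeline under the model id maxin-cn/Latte-1; the pipeline availability, the output handling, and the example prompt are assumptions based on typical diffusers usage, not an official snippet.

import torch
from diffusers import LattePipeline          # assumes a diffusers version that ships LattePipeline
from diffusers.utils import export_to_gif

# Assumed Hugging Face model id for this repo.
pipe = LattePipeline.from_pretrained("maxin-cn/Latte-1", torch_dtype=torch.float16).to("cuda")
# Optional: pipe.enable_model_cpu_offload() to reduce GPU memory usage.

prompt = "A small cactus with a happy face in the Sahara desert."
frames = pipe(prompt).frames[0]              # list of PIL frames for the generated video
export_to_gif(frames, "latte_t2v.gif")

The generated frames are written out as an animated GIF here purely for convenience; any video writer can be used instead.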

News

  • (🔥 New) May 23, 2024. 💥 Latte-1 for text-to-video generation is released! You can download the pre-trained model here. Latte-1 also supports text-to-image generation; please run bash sample/t2i.sh (see the sketch after this list).

  • (🔥 New) Mar. 20, 2024. 💥 An updated LatteT2V model is coming soon, stay tuned!

  • (🔥 New) Feb. 24, 2024. 💥 We are very grateful that researchers and developers are interested in our work. We will continue to update our LatteT2V model, and we hope our efforts help the community. Our Latte Discord channel has been created for discussions; contributions are welcome.

  • (🔥 New) Jan. 9, 2024. 💥 An updated LatteT2V model initialized with PixArt-α is released; the checkpoint can be found here.

  • (🔥 New) Oct. 31, 2023. 💥 The training and inference code is released. All checkpoints (including FaceForensics, SkyTimelapse, UCF101, and Taichi-HD) can be found here. In addition, the LatteT2V inference code is provided.
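As noted in the May 23 entry, Latte-1 can also produce single images. A minimal sketch, assuming the same diffusers LattePipeline from the earlier example exposes a video_length argument that can be set to 1 (this parameter name and the PIL output layout are assumptions, not confirmed by this card):

# Reuses the pipe object from the text-to-video sketch above.
# Assumption: video_length=1 yields a single PIL frame that can be saved as an image.
prompt = "A small cactus with a happy face in the Sahara desert."
image = pipe(prompt, video_length=1).frames[0][0]
image.save("latte_t2i.png")

Alternatively, the bash sample/t2i.sh script mentioned above runs text-to-image generation from the original code base directly.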

Contact Us

Yaohui Wang: wangyaohui@pjlab.org.cn
Xin Ma: xin.ma1@monash.edu

Citation

If you find this work useful for your research, please consider citing it.

@article{ma2024latte,
  title={Latte: Latent Diffusion Transformer for Video Generation},
  author={Ma, Xin and Wang, Yaohui and Jia, Gengyun and Chen, Xinyuan and Liu, Ziwei and Li, Yuan-Fang and Chen, Cunjian and Qiao, Yu},
  journal={arXiv preprint arXiv:2401.03048},
  year={2024}
}

Paper: https://huggingface.co/papers/2401.03048

Acknowledgments

Latte has been greatly inspired by the following amazing works and teams: DiT and PixArt-α. We thank all the contributors for open-sourcing their work.