maxin-cn committed
Commit 4051d56
Parent: 165dc29

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,167 +1,167 @@
---
title: Latte-1
app_file: demo.py
sdk: gradio
sdk_version: 4.37.2
---
## Latte: Latent Diffusion Transformer for Video Generation<br><sub>Official PyTorch Implementation</sub>

<!-- ### [Paper](https://arxiv.org/abs/2401.03048v1) | [Project Page](https://maxin-cn.github.io/latte_project/) -->

<!-- [![arXiv](https://img.shields.io/badge/arXiv-2401.03048-b31b1b.svg)](https://arxiv.org/abs/2401.03048) -->
[![Arxiv](https://img.shields.io/badge/Arxiv-b31b1b.svg)](https://arxiv.org/abs/2401.03048)
[![Project Page](https://img.shields.io/badge/Project-Website-blue)](https://maxin-cn.github.io/latte_project/)
[![HF Demo](https://img.shields.io/static/v1?label=Demo&message=OpenBayes%E8%B4%9D%E5%BC%8F%E8%AE%A1%E7%AE%97&color=green)](https://openbayes.com/console/public/tutorials/UOeU0ywVxl7)

[![Static Badge](https://img.shields.io/badge/Latte--1%20checkpoint%20(T2V)-HuggingFace-yellow?logoColor=violet%20Latte-1%20checkpoint)](https://huggingface.co/maxin-cn/Latte-1)
[![Static Badge](https://img.shields.io/badge/Latte%20checkpoint%20-HuggingFace-yellow?logoColor=violet%20Latte%20checkpoint)](https://huggingface.co/maxin-cn/Latte)

This repo contains PyTorch model definitions, pre-trained weights, training/sampling code, and evaluation code for our paper exploring
latent diffusion models with transformers (Latte). You can find more visualizations on our [project page](https://maxin-cn.github.io/latte_project/).

> [**Latte: Latent Diffusion Transformer for Video Generation**](https://maxin-cn.github.io/latte_project/)<br>
> [Xin Ma](https://maxin-cn.github.io/), [Yaohui Wang*](https://wyhsirius.github.io/), [Xinyuan Chen](https://scholar.google.com/citations?user=3fWSC8YAAAAJ), [Gengyun Jia](https://scholar.google.com/citations?user=_04pkGgAAAAJ&hl=zh-CN), [Ziwei Liu](https://liuziwei7.github.io/), [Yuan-Fang Li](https://users.monash.edu/~yli/), [Cunjian Chen](https://cunjian.github.io/), [Yu Qiao](https://scholar.google.com.hk/citations?user=gFtI-8QAAAAJ&hl=zh-CN)
> (*Corresponding Author & Project Lead)
<!-- > <br>Monash University, Shanghai Artificial Intelligence Laboratory,<br> NJUPT, S-Lab, Nanyang Technological University

We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where Latte achieves results comparable to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.

![The architecture of Latte](visuals/architecture.svg){width=20}
-->

<!--
<div align="center">
<img src="visuals/architecture.svg" width="650">
</div>

This repository contains:

* 🪐 A simple PyTorch [implementation](models/latte.py) of Latte
* ⚡️ **Pre-trained Latte models** trained on FaceForensics, SkyTimelapse, Taichi-HD and UCF101 (256x256). In addition, we provide a T2V checkpoint (512x512). All checkpoints can be found [here](https://huggingface.co/maxin-cn/Latte/tree/main).

* 🛸 A Latte [training script](train.py) using PyTorch DDP.
-->

<video controls loop src="https://github.com/Vchitect/Latte/assets/7929326/a650cd84-2378-4303-822b-56a441e1733b" type="video/mp4"></video>

## News
- (🔥 New) **Jul 11, 2024** 💥 **Latte-1 is now integrated into [diffusers](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/latte_transformer_3d.py). Thanks to [@yiyixuxu](https://github.com/yiyixuxu), [@sayakpaul](https://github.com/sayakpaul), [@a-r-r-o-w](https://github.com/a-r-r-o-w) and [@DN6](https://github.com/DN6).** You can easily run Latte using the following code. We also support inference with 4/8-bit quantization, which can reduce GPU memory from 17 GB to 9 GB. Please refer to this [tutorial](docs/latte_diffusers.md) for more information; a rough quantization sketch also follows the example below.

```python
from diffusers import LattePipeline
from diffusers.models import AutoencoderKLTemporalDecoder
from torchvision.utils import save_image  # handy for the single-frame (text-to-image) case
import torch
import imageio

torch.manual_seed(0)

device = "cuda" if torch.cuda.is_available() else "cpu"
video_length = 16  # 1 (text-to-image) or 16 (text-to-video)
pipe = LattePipeline.from_pretrained("maxin-cn/Latte-1", torch_dtype=torch.float16).to(device)

# Use the temporal decoder of the VAE
vae = AutoencoderKLTemporalDecoder.from_pretrained("maxin-cn/Latte-1", subfolder="vae_temporal_decoder", torch_dtype=torch.float16).to(device)
pipe.vae = vae

prompt = "a cat wearing sunglasses and working as a lifeguard at pool."
videos = pipe(prompt, video_length=video_length, output_type='pt').frames.cpu()

# Write the sampled frames to an MP4 (assumes frames come back as a float tensor in [0, 1])
frames = (videos[0].permute(0, 2, 3, 1).clamp(0, 1) * 255).to(torch.uint8).numpy()
imageio.mimwrite("latte_sample.mp4", frames, fps=8)
```
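
As a rough illustration of the 8-bit option mentioned above, the T5 text encoder (usually the largest memory consumer) can be quantized with `bitsandbytes` and passed into the pipeline. This is only a sketch assuming the standard `transformers`/`diffusers` APIs; the complete, tested recipe (including the 4-bit variant and the reported memory savings) is in the [tutorial](docs/latte_diffusers.md).

```python
import torch
from diffusers import LattePipeline
from transformers import BitsAndBytesConfig, T5EncoderModel

# Quantize only the T5 text encoder to 8-bit (requires the `bitsandbytes` package).
# The repo ID and subfolder mirror the example above; see docs/latte_diffusers.md
# for the exact configuration and device placement used in the tutorial.
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "maxin-cn/Latte-1",
    subfolder="text_encoder",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

pipe = LattePipeline.from_pretrained(
    "maxin-cn/Latte-1",
    text_encoder=text_encoder_8bit,
    torch_dtype=torch.float16,
)
```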

- (🔥 New) **May 23, 2024** 💥 **Latte-1** is released! The pre-trained model can be downloaded [here](https://huggingface.co/maxin-cn/Latte-1/tree/main/transformer). **We support both T2V and T2I**; please run `bash sample/t2v.sh` and `bash sample/t2i.sh`, respectively.

<!--
<div align="center">
<img src="visuals/latteT2V.gif" width=88%>
</div>
-->

- (🔥 New) **Feb 24, 2024** 💥 We are very grateful that researchers and developers like our work. We will continue to update our LatteT2V model, hoping that our efforts help the community. Our Latte Discord channel <a href="https://discord.gg/RguYqhVU92" style="text-decoration:none;">
<img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a> has been created for discussions, and contributors are welcome.

- (🔥 New) **Jan 9, 2024** 💥 An updated LatteT2V model initialized with [PixArt-α](https://github.com/PixArt-alpha/PixArt-alpha) is released; the checkpoint can be found [here](https://huggingface.co/maxin-cn/Latte-0/tree/main/transformer).

- (🔥 New) **Oct 31, 2023** 💥 The training and inference code is released. All checkpoints (including FaceForensics, SkyTimelapse, UCF101, and Taichi-HD) can be found [here](https://huggingface.co/maxin-cn/Latte/tree/main). In addition, the LatteT2V inference code is provided.


## Setup

First, download and set up the repo:

```bash
git clone https://github.com/Vchitect/Latte
cd Latte
```

We provide an [`environment.yml`](environment.yml) file that can be used to create a Conda environment. If you only want
to run pre-trained models locally on CPU, you can remove the `cudatoolkit` and `pytorch-cuda` requirements from the file.

```bash
conda env create -f environment.yml
conda activate latte
```
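
If you prefer pip over Conda, a minimal alternative is sketched below. It assumes the packages pinned in the repo's [`requirements.txt`](requirements.txt) cover your use case and that you first install a PyTorch build matching your CUDA (or CPU) setup.

```bash
python -m venv .venv && source .venv/bin/activate
pip install torch torchvision  # pick the wheel that matches your CUDA version
pip install -r requirements.txt
```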


## Sampling

You can sample from our **pre-trained Latte models** with [`sample.py`](sample/sample.py). Weights for our pre-trained Latte models can be found [here](https://huggingface.co/maxin-cn/Latte). The script has various arguments to adjust the number of sampling steps, change the classifier-free guidance scale, etc. For example, to sample from our model trained on FaceForensics, you can use:

```bash
bash sample/ffs.sh
```

Or, if you want to sample hundreds of videos, you can use the following script with PyTorch DDP:

```bash
bash sample/ffs_ddp.sh
```

If you want to try generating videos from text, just run `bash sample/t2v.sh`. All related checkpoints will be downloaded automatically.

If you would like to measure quantitative metrics on your generated results, please refer to [this guide](docs/datasets_evaluation.md).

## Training

We provide a training script for Latte in [`train.py`](train.py). The structure of the datasets can be found [here](docs/datasets_evaluation.md). This script can be used to train class-conditional and unconditional
Latte models. To launch Latte (256x256) training with `N` GPUs on the FaceForensics dataset:

```bash
torchrun --nnodes=1 --nproc_per_node=N train.py --config ./configs/ffs/ffs_train.yaml
```

Or, if you have a cluster that uses Slurm, you can also train Latte using the following script:

```bash
sbatch slurm_scripts/ffs.slurm
```
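
If `slurm_scripts/ffs.slurm` needs to be adapted to your cluster, a minimal launcher could look like the sketch below. The resource directives are placeholders rather than values taken from the repo; match them to your partition, GPU count, and the `--nproc_per_node` you pass to `torchrun`.

```bash
#!/bin/bash
#SBATCH --job-name=latte-ffs
#SBATCH --nodes=1
#SBATCH --gres=gpu:8          # placeholder: one process per GPU below
#SBATCH --cpus-per-task=16
#SBATCH --time=48:00:00

source activate latte         # or `conda activate latte`, depending on your cluster's Conda setup
torchrun --nnodes=1 --nproc_per_node=8 train.py --config ./configs/ffs/ffs_train.yaml
```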

We also provide a video-image joint training script, [`train_with_img.py`](train_with_img.py). Similar to [`train.py`](train.py), it can also be used to train class-conditional and unconditional
Latte models. For example, if you want to train the Latte model on the FaceForensics dataset, you can use:

```bash
torchrun --nnodes=1 --nproc_per_node=N train_with_img.py --config ./configs/ffs/ffs_img_train.yaml
```

## Contact Us
**Yaohui Wang**: [wangyaohui@pjlab.org.cn](mailto:wangyaohui@pjlab.org.cn)
**Xin Ma**: [xin.ma1@monash.edu](mailto:xin.ma1@monash.edu)

## Citation
If you find this work useful for your research, please consider citing it.
```bibtex
@article{ma2024latte,
  title={Latte: Latent Diffusion Transformer for Video Generation},
  author={Ma, Xin and Wang, Yaohui and Jia, Gengyun and Chen, Xinyuan and Liu, Ziwei and Li, Yuan-Fang and Chen, Cunjian and Qiao, Yu},
  journal={arXiv preprint arXiv:2401.03048},
  year={2024}
}
```


## Acknowledgments
Latte has been greatly inspired by the following amazing works and teams: [DiT](https://github.com/facebookresearch/DiT) and [PixArt-α](https://github.com/PixArt-alpha/PixArt-alpha). We thank all the contributors for open-sourcing their work.


## License
The code and model weights are licensed under [LICENSE](LICENSE).

demo.py CHANGED
@@ -18,7 +18,7 @@ import os, sys
sys.path.append(os.path.split(sys.path[0])[0])
from sample.pipeline_latte import LattePipeline
from models import get_models
- # import imageio
+ import imageio
from torchvision.utils import save_image
import spaces

@@ -32,8 +32,6 @@ torch.set_grad_enabled(False)
device = "cuda" if torch.cuda.is_available() else "cpu"

transformer_model = get_models(args).to(device, dtype=torch.float16)
- # state_dict = find_model(args.ckpt)
- # msg, unexp = transformer_model.load_state_dict(state_dict, strict=False)

if args.enable_vae_temporal_decoder:
vae = AutoencoderKLTemporalDecoder.from_pretrained(args.pretrained_model_path, subfolder="vae_temporal_decoder", torch_dtype=torch.float16).to(device)
@@ -144,7 +142,8 @@ def gen_video(text_input, sample_method, scfg_scale, seed, height, width, video_
).video

save_path = args.save_img_path + 'temp' + '.mp4'
- torchvision.io.write_video(save_path, videos[0], fps=8)
+ # torchvision.io.write_video(save_path, videos[0], fps=8)
+ imageio.mimwrite(save_path, videos[0], fps=8, quality=7)
return save_path


@@ -276,9 +275,26 @@ with gr.Blocks() as demo:
run = gr.Button("💭Run")
# with gr.Column(scale=0.5, min_width=0):
#     clear = gr.Button("🔄Clear️")
+
+ EXAMPLES = [
+ ["3D animation of a small, round, fluffy creature with big, expressive eyes explores a vibrant, enchanted forest. The creature, a whimsical blend of a rabbit and a squirrel, has soft blue fur and a bushy, striped tail. It hops along a sparkling stream, its eyes wide with wonder. The forest is alive with magical elements: flowers that glow and change colors, trees with leaves in shades of purple and silver, and small floating lights that resemble fireflies. The creature stops to interact playfully with a group of tiny, fairy-like beings dancing around a mushroom ring. The creature looks up in awe at a large, glowing tree that seems to be the heart of the forest.", "DDIM", 7.5, 100, 512, 512, 16, 50],
+ ["A grandmother with neatly combed grey hair stands behind a colorful birthday cake with numerous candles at a wood dining room table, expression is one of pure joy and happiness, with a happy glow in her eye. She leans forward and blows out the candles with a gentle puff, the cake has pink frosting and sprinkles and the candles cease to flicker, the grandmother wears a light blue blouse adorned with floral patterns, several happy friends and family sitting at the table can be seen celebrating, out of focus. The scene is beautifully captured, cinematic, showing a 3/4 view of the grandmother and the dining room. Warm color tones and soft lighting enhance the mood.", "DDIM", 7.5, 100, 512, 512, 16, 50],
+ ["A wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand.", "DDIM", 7.5, 100, 512, 512, 16, 50],
+ ["A young man at his 20s is sitting on a piece of cloud in the sky, reading a book.", "DDIM", 7.5, 100, 512, 512, 16, 50],
+ ["Cinematic trailer for a group of samoyed puppies learning to become chefs.", "DDIM", 7.5, 100, 512, 512, 16, 50],
+ ["Drone view of waves crashing against the rugged cliffs along Big Sur’s garay point beach. The crashing blue waters create white-tipped waves, while the golden light of the setting sun illuminates the rocky shore. A small island with a lighthouse sits in the distance, and green shrubbery covers the cliff’s edge. The steep drop from the road down to the beach is a dramatic feat, with the cliff’s edges jutting out over the sea. This is a view that captures the raw beauty of the coast and the rugged landscape of the Pacific Coast Highway.", "DDIM", 7.5, 100, 512, 512, 16, 50],
+ ["A cyborg koala dj in front of aturntable, in heavy raining futuristic tokyo rooftop cyberpunk night, sci-f, fantasy, intricate, neon light, soft light smooth, sharp focus, illustration.", "DDIM", 7.5, 100, 512, 512, 16, 50],
+ ]
+
+ examples = gr.Examples(
+ examples = EXAMPLES,
+ fn = gen_video,
+ inputs=[text_input, sample_method, scfg_scale, seed, height, width, video_length, diffusion_step],
+ outputs=[output],
+ # cache_examples=True,
+ cache_examples="lazy",
+ )

run.click(gen_video, [text_input, sample_method, scfg_scale, seed, height, width, video_length, diffusion_step], [output])

demo.launch(debug=False, share=True)
-
- # demo.launch(server_name="0.0.0.0", server_port=10034, enable_queue=True)
 
gradio_cached_examples/41/component 0/2ccc9ce6c64b94957f04/.nfsb23b4a76308e968a0000914a ADDED
Binary file (407 kB).
 
gradio_cached_examples/41/component 0/2ccc9ce6c64b94957f04/t2v-temp.mp4 ADDED
Binary file (246 kB).
 
gradio_cached_examples/41/component 0/3db6fb8d8fce26e8e971/t2v-temp.mp4 ADDED
Binary file (432 kB).
 
gradio_cached_examples/41/component 0/5167a9eca57b2e5c60e6/t2v-temp.mp4 ADDED
Binary file (893 kB).
 
gradio_cached_examples/41/component 0/522c07b86b97831454fc/.nfs33beb40e32875f380000914b ADDED
Binary file (145 kB).
 
gradio_cached_examples/41/component 0/522c07b86b97831454fc/t2v-temp.mp4 ADDED
Binary file (60 kB).
 
gradio_cached_examples/41/component 0/889fbc1a8103cc0838df/.nfs9ba939f82a5a21800000914c ADDED
Binary file (448 kB).
 
gradio_cached_examples/41/component 0/889fbc1a8103cc0838df/t2v-temp.mp4 ADDED
Binary file (450 kB).
 
gradio_cached_examples/41/component 0/8d6c4a965ec138d78166/.nfs8d6ce3864023a73c0000914d ADDED
Binary file (256 kB).
 
gradio_cached_examples/41/component 0/8d6c4a965ec138d78166/t2v-temp.mp4 ADDED
Binary file (75.3 kB).
 
gradio_cached_examples/41/component 0/c652422165f22c101406/.nfs6ea8fb4ca807303e0000914e ADDED
Binary file (301 kB).
 
gradio_cached_examples/41/component 0/c652422165f22c101406/t2v-temp.mp4 ADDED
Binary file (80.5 kB).
 
gradio_cached_examples/41/indices.csv ADDED
@@ -0,0 +1,7 @@
+ 0
+ 1
+ 2
+ 3
+ 4
+ 5
+ 6
gradio_cached_examples/41/log.csv ADDED
@@ -0,0 +1,8 @@
+ component 0,flag,username,timestamp
+ "{""video"": {""path"": ""gradio_cached_examples/41/component 0/2ccc9ce6c64b94957f04/t2v-temp.mp4"", ""url"": ""/file=/data/pe1/000scratch/slurm_tmpdir/20240727_job_53250001.VBWa/gradio/4fc6805067c5fc560fcbdf135af1f8d9bf6df508/t2v-temp.mp4"", ""size"": null, ""orig_name"": ""t2v-temp.mp4"", ""mime_type"": null, ""is_stream"": false, ""meta"": {""_type"": ""gradio.FileData""}}, ""subtitles"": null}",,,2024-07-27 15:16:50.157072
+ "{""video"": {""path"": ""gradio_cached_examples/41/component 0/c652422165f22c101406/t2v-temp.mp4"", ""url"": ""/file=/data/pe1/000scratch/slurm_tmpdir/20240727_job_53250001.VBWa/gradio/e5e2c7576a1ebf37502ec48a45456b24fe70d2bc/t2v-temp.mp4"", ""size"": null, ""orig_name"": ""t2v-temp.mp4"", ""mime_type"": null, ""is_stream"": false, ""meta"": {""_type"": ""gradio.FileData""}}, ""subtitles"": null}",,,2024-07-27 15:17:33.646700
+ "{""video"": {""path"": ""gradio_cached_examples/41/component 0/3db6fb8d8fce26e8e971/t2v-temp.mp4"", ""url"": ""/file=/data/pe1/000scratch/slurm_tmpdir/20240727_job_53250001.VBWa/gradio/0ca05017eb2eb99d02b46da72cccff8a01a43fef/t2v-temp.mp4"", ""size"": null, ""orig_name"": ""t2v-temp.mp4"", ""mime_type"": null, ""is_stream"": false, ""meta"": {""_type"": ""gradio.FileData""}}, ""subtitles"": null}",,,2024-07-27 15:18:14.910061
+ "{""video"": {""path"": ""gradio_cached_examples/41/component 0/522c07b86b97831454fc/t2v-temp.mp4"", ""url"": ""/file=/data/pe1/000scratch/slurm_tmpdir/20240727_job_53250001.VBWa/gradio/fcbe9334e8f526232bd9f91fb84c1e21696ec209/t2v-temp.mp4"", ""size"": null, ""orig_name"": ""t2v-temp.mp4"", ""mime_type"": null, ""is_stream"": false, ""meta"": {""_type"": ""gradio.FileData""}}, ""subtitles"": null}",,,2024-07-27 15:18:57.457337
+ "{""video"": {""path"": ""gradio_cached_examples/41/component 0/8d6c4a965ec138d78166/t2v-temp.mp4"", ""url"": ""/file=/data/pe1/000scratch/slurm_tmpdir/20240727_job_53250001.VBWa/gradio/59b4ad6218f3d52337306a55201836a3e8bb6f35/t2v-temp.mp4"", ""size"": null, ""orig_name"": ""t2v-temp.mp4"", ""mime_type"": null, ""is_stream"": false, ""meta"": {""_type"": ""gradio.FileData""}}, ""subtitles"": null}",,,2024-07-27 15:19:34.907998
+ "{""video"": {""path"": ""gradio_cached_examples/41/component 0/5167a9eca57b2e5c60e6/t2v-temp.mp4"", ""url"": ""/file=/data/pe1/000scratch/slurm_tmpdir/20240727_job_53250001.VBWa/gradio/ab334488c97b4f1de08e80f979b73215852a706c/t2v-temp.mp4"", ""size"": null, ""orig_name"": ""t2v-temp.mp4"", ""mime_type"": null, ""is_stream"": false, ""meta"": {""_type"": ""gradio.FileData""}}, ""subtitles"": null}",,,2024-07-27 15:20:34.839324
+ "{""video"": {""path"": ""gradio_cached_examples/41/component 0/889fbc1a8103cc0838df/t2v-temp.mp4"", ""url"": ""/file=/data/pe1/000scratch/slurm_tmpdir/20240727_job_53250001.VBWa/gradio/546b693a38ce37823fe124af23709b6f9f49e088/t2v-temp.mp4"", ""size"": null, ""orig_name"": ""t2v-temp.mp4"", ""mime_type"": null, ""is_stream"": false, ""meta"": {""_type"": ""gradio.FileData""}}, ""subtitles"": null}",,,2024-07-27 15:21:16.519586
requirements.txt CHANGED
@@ -16,4 +16,5 @@ sentencepiece
beautifulsoup4
ftfy
omegaconf
- spaces
+ spaces
+ imageio-ffmpeg
sample_videos/t2v-temp.mp4 CHANGED
Binary files a/sample_videos/t2v-temp.mp4 and b/sample_videos/t2v-temp.mp4 differ