Update README.md
README.md
CHANGED
@@ -1,40 +1,22 @@
---
pipeline_tag: text-to-video
---
-# AnimateLCM for Fast Video Generation in 4 steps.
-
-[AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data](https://arxiv.org/abs/2402.00769) by Fu-Yun Wang et al.
-
-## We also support fast image-to-video generation, please see [AnimateLCM-SVD-xt](https://huggingface.co/wangfuyun/AnimateLCM-SVD-xt) and [AnimateLCM-I2V](https://huggingface.co/wangfuyun/AnimateLCM-I2V).
-
-For more details, please refer to our [[paper](https://arxiv.org/abs/2402.00769)] | [[code](https://github.com/G-U-N/AnimateLCM)] | [[proj-page](https://animatelcm.github.io/)] | [[civitai](https://civitai.com/models/290375/animatelcm-fast-video-generation)].

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/KCwSoZCdxkkmtDg1LuXsP.mp4"></video>

## Using AnimateLCM with Diffusers

```python
-import torch
-from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
-from diffusers.utils import export_to_gif
-
-adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
-pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16)
-pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
+import torch, torch_xla, diffusers

-
-
+motion_adapter = diffusers.MotionAdapter.from_pretrained('chaowenguo/AnimateLCM', torch_dtype=torch.bfloat16, variant='fp16', use_safetensors=True)
+vae = diffusers.AutoencoderKL.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/vae-ft-mse-840000-ema-pruned.safetensors', torch_dtype=torch.bfloat16, use_safetensors=True)
+pipe = diffusers.AnimateDiffPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors', config='chaowenguo/stable-diffusion-v1-5', safety_checker=None, use_safetensors=True, torch_dtype=torch.bfloat16, motion_adapter=motion_adapter, vae=vae).to(torch_xla.core.xla_model.xla_device())
+pipe.scheduler = diffusers.LCMScheduler.from_config(pipe.scheduler.config, beta_schedule='linear')
+pipe.load_lora_weights('chaowenguo/AnimateLCM', weight_name='AnimateLCM_sd15_t2v_lora.safetensors')

pipe.enable_vae_slicing()
-pipe.enable_model_cpu_offload()

-output = pipe(
-
-
-    num_frames=16,
-    guidance_scale=2.0,
-    num_inference_steps=6,
-    generator=torch.Generator("cpu").manual_seed(0),
-)
-frames = output.frames[0]
-export_to_gif(frames, "animatelcm.gif")
+output = pipe(prompt="A full body gorgeous smiling slim young cleavage robust boob japanese girl, wearing white deep V bandeau pantie, lying on back on white bed, beautiful face, hands with five fingers, best quality, extremely detailed, HD, ultra-realistic, 8K, HQ, masterpiece, trending on artstation, art, smooth", negative_prompt="nipple, dudou, shirt, shawl, hat, sock, sleeve, monochrome, dark background, longbody, lowres, bad anatomy, bad hands, fused fingers, missing fingers, too many fingers, extra digit, fewer digits, cropped, worst quality, low quality, deformed body, bloated, ugly, unrealistic, extra hands and arms", height=912, num_frames=16, guidance_scale=3, num_inference_steps=8, generator=torch.manual_seed(0), context_frames=8)
+diffusers.utils.export_to_video(output.frames[0], "animatelcm.mp4")
+```
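
The updated snippet is written for an XLA/TPU runtime (`torch_xla`) with bfloat16 weights. As a point of reference, here is a minimal sketch of the same AnimateLCM setup on an ordinary CUDA GPU: it keeps the motion adapter, LCM scheduler, AnimateLCM LoRA, and VAE referenced in the diff, but loads the `emilianJR/epiCRealism` base with `from_pretrained` (as the removed snippet did), uses float16 with CPU offloading, and substitutes a generic placeholder prompt. Treat it as an illustration of the workflow, not as part of the commit.

```python
# Minimal sketch (assumptions: CUDA GPU, float16, emilianJR/epiCRealism base,
# placeholder prompt). Checkpoints, scheduler, and LoRA follow the diff above.
import torch
from diffusers import AnimateDiffPipeline, AutoencoderKL, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_video

# AnimateLCM motion module and the improved SD1.5 VAE referenced in the diff.
motion_adapter = MotionAdapter.from_pretrained("chaowenguo/AnimateLCM", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
vae = AutoencoderKL.from_single_file("https://huggingface.co/chaowenguo/pal/blob/main/vae-ft-mse-840000-ema-pruned.safetensors", torch_dtype=torch.float16)

pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=motion_adapter, vae=vae, torch_dtype=torch.float16)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.load_lora_weights("chaowenguo/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors")

pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()  # moves each sub-model to the GPU only while it is needed

output = pipe(
    prompt="a rocket launching into the night sky, best quality, extremely detailed",  # placeholder prompt
    negative_prompt="bad quality, worst quality, low resolution",
    num_frames=16,
    guidance_scale=2.0,
    num_inference_steps=6,
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_video(output.frames[0], "animatelcm.mp4")
```

As in both versions of the README, the LCM scheduler plus the AnimateLCM LoRA is what enables generation in very few denoising steps: the removed example uses 6 steps at guidance scale 2.0, the updated one 8 steps at guidance scale 3.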