---
pipeline_tag: image-to-video
---

# AnimateLCM-I2V for Fast Image-conditioned Video Generation in 4 Steps

AnimateLCM-I2V is a latent image-to-video consistency model fine-tuned from [AnimateLCM](https://huggingface.co/wangfuyun/AnimateLCM) following the strategy proposed in the [AnimateLCM paper](https://arxiv.org/abs/2402.00769), without requiring teacher models.
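
The image-conditioned pipeline itself ships with the project code linked below, so this card carries no usage snippet. As a minimal sketch of the same few-step consistency recipe, the companion text-to-video AnimateLCM checkpoint can be run with Diffusers' `AnimateDiffPipeline` plus `LCMScheduler`; the `emilianJR/epiCRealism` base model and the sampling settings here are illustrative assumptions, not part of this card.

```python
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the AnimateLCM motion adapter and an SD1.5 base model
# (emilianJR/epiCRealism is an assumed example base).
adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
)

# LCMScheduler enables few-step consistency sampling.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.load_lora_weights(
    "wangfuyun/AnimateLCM",
    weight_name="AnimateLCM_sd15_t2v_lora.safetensors",
    adapter_name="lcm-lora",
)
pipe.set_adapters(["lcm-lora"], [0.8])

pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt="a boat sailing on a calm sea at sunset, best quality",
    negative_prompt="bad quality, worse quality, low resolution",
    num_frames=16,
    guidance_scale=2.0,     # consistency models work with low CFG
    num_inference_steps=4,  # 4-step generation
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatelcm.gif")
```

For the image-conditioned AnimateLCM-I2V weights, refer to the project repository linked below for the official pipeline.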

[AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning](https://arxiv.org/abs/2402.00769) by Fu-Yun Wang et al.

## Example Video

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/P3rcJbtTKYVnBfufZ_OVg.png)

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/SMZ4DAinSnrxKsVEW8dio.mp4"></video>

For more details, please refer to our [[paper](https://arxiv.org/abs/2402.00769)] | [[code](https://github.com/G-U-N/AnimateLCM)] | [[project page](https://animatelcm.github.io/)] | [[Civitai](https://civitai.com/models/290375/animatelcm-fast-video-generation)].

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/KCwSoZCdxkkmtDg1LuXsP.mp4"></video>