---
library_name: diffusers
pipeline_tag: text-to-video
---

AnimateDiff is a method that lets you create videos from pre-existing Stable Diffusion text-to-image models.

It achieves this by inserting motion module layers into a frozen text-to-image model and training those layers on video clips to extract a motion prior.
These motion modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet. Their purpose is to introduce coherent motion across image frames. To support these modules, we introduce the concepts of a MotionAdapter and a UNetMotionModel, which serve as a convenient way to use the motion modules with existing Stable Diffusion models.
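For intuition, a temporal motion module can be thought of as attention over the frame axis: the UNet's spatial feature maps are regrouped so that each spatial location sees its own sequence of frames. The sketch below illustrates only that reshape with numpy; it is not the actual diffusers implementation, and all shapes and names are hypothetical.

```python
import numpy as np

# Hypothetical UNet feature map for a 16-frame video, laid out as the
# spatial UNet blocks see it: (batch * frames, channels, height, width).
batch, frames, channels, height, width = 1, 16, 320, 8, 8
features = np.random.randn(batch * frames, channels, height, width)

# A motion module attends across frames, so regroup the tensor to
# (batch * height * width, frames, channels): one frame sequence
# per spatial location.
x = features.reshape(batch, frames, channels, height, width)
x = x.transpose(0, 3, 4, 1, 2)  # (batch, height, width, frames, channels)
temporal = x.reshape(batch * height * width, frames, channels)

# After temporal attention, the inverse reshape restores the spatial
# layout expected by the next UNet block.
restored = (
    temporal.reshape(batch, height, width, frames, channels)
    .transpose(0, 3, 4, 1, 2)
    .reshape(batch * frames, channels, height, width)
)
assert np.allclose(features, restored)
```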
11 |
+
<table>
|
12 |
+
<tr>
|
13 |
+
<td><center>
|
14 |
+
masterpiece, bestquality, sunset.
|
15 |
+
<br>
|
16 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-realistic-doc.gif"
|
17 |
+
alt="masterpiece, bestquality, sunset"
|
18 |
+
style="width: 300px;" />
|
19 |
+
</center></td>
|
20 |
+
</tr>
|
21 |
+
</table>
|
22 |
+
|
The following example demonstrates how to use the motion modules with an existing Stable Diffusion text-to-image model.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
from diffusers.utils import export_to_gif

# Load the motion adapter
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-4")
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter)
scheduler = DDIMScheduler.from_pretrained(
    model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1
)
pipe.scheduler = scheduler

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt=(
        "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
        "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
        "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
        "golden hour, coastal landscape, seaside scenery"
    ),
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```

<Tip>

AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable clipping by setting `clip_sample=False` in the scheduler, as sample clipping has an adverse effect on the generated frames.

</Tip>
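To see why clipping matters, note that with `clip_sample=True` the scheduler clamps its predicted denoised sample to `[-clip_sample_range, clip_sample_range]` at every denoising step. The toy numpy sketch below only illustrates that clamping effect; it is not the scheduler's actual code, and the sample values are made up.

```python
import numpy as np

# Toy sketch (not DDIMScheduler's actual code): clip_sample=True clamps
# the predicted denoised sample to [-clip_sample_range, clip_sample_range].
clip_sample_range = 1.0
pred_original_sample = np.array([-2.5, -0.3, 0.7, 1.8])

clipped = np.clip(pred_original_sample, -clip_sample_range, clip_sample_range)
# Values outside the range are flattened to the boundary, discarding
# information -- hence the recommendation to set clip_sample=False.
print(clipped)
```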