Update README.md
README.md
CHANGED
@@ -3,6 +3,10 @@ pipeline_tag: text-to-video
 license: other
 license_name: tencent-hunyuan-community
 license_link: LICENSE
+library_name: diffusers
+tags:
+- video
+- HunyuanVideoPipeline
 ---
 
 <!-- ## **HunyuanVideo** -->
@@ -215,6 +219,37 @@ We list the height/width/frame settings we support in the following table.
 | 540p | 544px960px129f | 960px544px129f | 624px832px129f | 832px624px129f | 720px720px129f |
 | 720p (recommended) | 720px1280px129f | 1280px720px129f | 1104px832px129f | 832px1104px129f | 960px960px129f |
 
+### Using Diffusers
+
+HunyuanVideo can be used directly from Diffusers. Install the latest version of Diffusers.
+
+```python
+import torch
+from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
+from diffusers.utils import export_to_video
+
+model_id = "tencent/HunyuanVideo"
+transformer = HunyuanVideoTransformer3DModel.from_pretrained(
+    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
+)
+pipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)
+
+# Enable memory savings
+pipe.vae.enable_tiling()
+pipe.enable_model_cpu_offload()
+
+output = pipe(
+    prompt="A cat walks on the grass, realistic",
+    height=320,
+    width=512,
+    num_frames=61,
+    num_inference_steps=30,
+).frames[0]
+export_to_video(output, "output.mp4", fps=15)
+```
+
+Refer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video) for more information.
+
 ### Using Command Line
 
 ```bash
@@ -265,4 +300,4 @@ If you find [HunyuanVideo](https://arxiv.org/abs/2412.03603) useful for your res
 
 ## Acknowledgements
 We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
-
+Additionally, we also thank the Tencent Hunyuan Multimodal team for their help with the text encoder.
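
Note on the metadata change: the `library_name: diffusers` and `tags` entries added to the frontmatter are what surface the Diffusers integration on the model page. As an illustrative aside, not part of this commit, the pipeline can in principle also be loaded through the generic `DiffusionPipeline` entry point, which resolves the concrete pipeline class from the repository's `model_index.json`; the explicit `HunyuanVideoPipeline` import shown in the README remains the documented path. A minimal sketch, assuming the same `tencent/HunyuanVideo` repo id as the example above:

```python
import torch
from diffusers import DiffusionPipeline  # generic loader; resolves the pipeline class from model_index.json

# Simplification vs. the README example: the whole pipeline is loaded in float16
# instead of loading the transformer separately in bfloat16.
pipe = DiffusionPipeline.from_pretrained("tencent/HunyuanVideo", torch_dtype=torch.float16)
pipe.vae.enable_tiling()          # same memory-saving switches as in the README example
pipe.enable_model_cpu_offload()
```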
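
Relatedly, the settings table in the second hunk lists `720px1280px129f` as the recommended 720p configuration, while the Python example generates a reduced 320x512, 61-frame clip to keep memory usage low. A minimal sketch of the same call at the recommended setting, reusing the exact setup from the README example; only `height`, `width`, and `num_frames` change, and considerably more memory and runtime should be expected:

```python
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

# Same setup as the README example above.
model_id = "tencent/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)
pipe.vae.enable_tiling()
pipe.enable_model_cpu_offload()

# Recommended 720p row of the settings table: 720px x 1280px x 129 frames.
output = pipe(
    prompt="A cat walks on the grass, realistic",
    height=720,
    width=1280,
    num_frames=129,
    num_inference_steps=30,
).frames[0]
export_to_video(output, "output_720p.mp4", fps=15)
```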