---
library_name: diffusers
---
|
|
This is a development model meant to help test the HunyuanVideoPipeline integration into diffusers. Please help out if you can on the [PR](https://github.com/huggingface/diffusers/pull/10136).
|
|
|
|
|
```bash
pip install -qq git+https://github.com/huggingface/diffusers.git@hunyuan-video
```
|
|
|
```python
import torch
from diffusers import HunyuanVideoPipeline

pipe = HunyuanVideoPipeline.from_pretrained(
    "magespace/hyvideo-diffusers-dev",
    torch_dtype=torch.bfloat16,
)
```
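
A generation call produces the `result` object used in the post-processing step below. This is a minimal sketch; the prompt and step count are illustrative, and the default resolution and frame count require substantial GPU memory:

```python
pipe.to("cuda")

# Illustrative prompt; defaults are used for resolution and frame count.
result = pipe(
    prompt="A cat walks on the grass, realistic style.",
    num_inference_steps=30,
)
```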
|
|
|
Post-processing:
|
|
|
```python
from diffusers.utils import export_to_video

# `result` is the output of the pipeline call above.
export_to_video(result.frames[0], "output.mp4", fps=24)
```
|
|
|
For faster generation, you can optimize the `transformer` with `torch.compile`. Additionally, increasing `shift` in the scheduler allows for fewer inference steps, as shown in the original paper.
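
A rough sketch of both tweaks, assuming the pipeline uses a flow-match scheduler that accepts a `shift` argument (the scheduler class and the values here are illustrative, not recommended settings):

```python
import torch
from diffusers import FlowMatchEulerDiscreteScheduler

# Rebuild the scheduler with a larger shift so fewer steps are needed
# (illustrative value; tune for your prompts).
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, shift=9.0
)

# Compile the transformer for faster repeated generation
# (the first call is slower while compilation happens).
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune")
```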
|
|
|
Generation time scales roughly quadratically with the number of pixels, so reducing the height and width and lowering the number of frames will drastically speed up generation at the cost of video quality.
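
For example, a low-resolution, short clip generates much faster; the values below are illustrative (height and width should stay multiples of 16, and `num_frames` of the form 4k + 1):

```python
# Smaller resolution and fewer frames trade quality for speed.
result = pipe(
    prompt="A cat walks on the grass, realistic style.",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
)
```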