Update README.md
README.md
CHANGED
@@ -117,7 +117,6 @@ library_name: diffusers
 | `--first_frame` | [Required] First-frame image input for image-to-video generation. |
 | `--last_frame` | [Optional] If provided, the model will generate intermediate video content based on the specified first and last frame images. |
 | `--enable_cpu_offload` | [Optional] Offload the model to the CPU to reduce GPU memory usage (about 9.3 GB, compared to 27.5 GB without CPU offload), at the cost of significantly longer inference time. |
-
 5. **(Optional) Interpolate the video to 30 FPS**

    It is recommended to use [EMA-VFI](https://github.com/MCG-NJU/EMAVFI) to interpolate the video from 15 FPS to 30 FPS.
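For illustration, a minimal invocation combining the options above might look like the sketch below. The script name `inference.py` and the `--prompt` argument are assumptions for this example and may differ in this repository; only `--first_frame`, `--last_frame`, and `--enable_cpu_offload` come from the table above.

```bash
# Hypothetical example: the script name and --prompt flag are assumptions;
# --first_frame, --last_frame, and --enable_cpu_offload are the documented options.
python inference.py \
    --prompt "A time-lapse of clouds drifting over a mountain lake" \
    --first_frame assets/first_frame.png \
    --last_frame assets/last_frame.png \
    --enable_cpu_offload
```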