svjack committed 586fbd1 (parent: 3cb842c): Update README.md

Files changed (1): README.md (+112 -1)
  <p style="text-align: center;">派蒙</p>
  </div>
  </div>
</div>

### Generating an Animation of Zhongli
Here's an example of how to generate an animation of Zhongli using the `AnimateDiffSDXLPipeline`:

```python
import torch
from diffusers.models import MotionAdapter
from diffusers import AnimateDiffSDXLPipeline, DDIMScheduler
from diffusers.utils import export_to_gif, export_to_video

# Load the SDXL motion adapter for AnimateDiff
adapter = MotionAdapter.from_pretrained(
    "a-r-r-o-w/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16
)

model_id = "svjack/GenshinImpact_XL_Base"
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)

pipe = AnimateDiffSDXLPipeline.from_pretrained(
    model_id,
    motion_adapter=adapter,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to("cuda")

# Enable memory savings
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

output = pipe(
    prompt="solo,ZHONGLI\(genshin impact\),1boy,portrait,upper_body,highres, keep eyes forward.",
    negative_prompt="low quality, worst quality",
    num_inference_steps=20,
    guidance_scale=8,
    width=1024,
    height=1024,
    num_frames=16,
    generator=torch.manual_seed(4),
)
frames = output.frames[0]
export_to_gif(frames, "zhongli_animation.gif")
export_to_video(frames, "zhongli_animation.mp4")

# Preview the result in a notebook
from IPython import display
display.Video("zhongli_animation.mp4", width=512, height=512)
```

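After exporting, it can be worth confirming that all `num_frames` frames survived the GIF round-trip. Below is a small self-contained check using Pillow (a generic helper, not part of diffusers; the demo writes its own tiny GIF so it runs without the pipeline output):

```python
from PIL import Image

def gif_frame_count(path: str) -> int:
    # n_frames reports the number of frames in an animated image; plain images lack it
    with Image.open(path) as im:
        return getattr(im, "n_frames", 1)

# Self-contained demo: write a 16-frame GIF and count its frames
frames = [Image.new("RGB", (64, 64), (i * 15, 0, 0)) for i in range(16)]
frames[0].save("demo.gif", save_all=True, append_images=frames[1:], duration=100, loop=0)
print(gif_frame_count("demo.gif"))
```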
#### Enhancing the Animation with RIFE
To enhance the animation with RIFE (Real-Time Intermediate Flow Estimation), which interpolates extra frames for smoother motion:

```bash
git clone https://github.com/svjack/Practical-RIFE && cd Practical-RIFE && pip install -r requirements.txt
python inference_video.py --multi=128 --video=../zhongli_animation.mp4
```

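The `--multi=128` flag raises the temporal resolution by roughly a factor of 128 while leaving the clip duration unchanged. As a back-of-the-envelope sketch (assuming the exported clip is 16 frames at 10 fps, which matches the `1280fps` tag in the interpolated filename below):

```python
def interpolated(num_frames, fps, multi):
    """Approximate frame count and fps after interpolation with a --multi factor.

    The clip duration stays the same; only the temporal resolution grows.
    """
    return num_frames * multi, fps * multi

# 16 generated frames at an assumed 10 fps, interpolated 128x:
frames, fps = interpolated(16, 10, 128)
print(frames, fps)  # 2048 1280
```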
#### Merging Videos Horizontally
To compare the original and the interpolated clip side by side, you can merge the two videos horizontally with the following function:

```python
from moviepy.editor import VideoFileClip, CompositeVideoClip

def merge_videos_horizontally(video_path1, video_path2, output_video_path):
    clip1 = VideoFileClip(video_path1)
    clip2 = VideoFileClip(video_path2)

    # Loop the shorter clip so both run for the same duration
    max_duration = max(clip1.duration, clip2.duration)
    if clip1.duration < max_duration:
        clip1 = clip1.loop(duration=max_duration)
    if clip2.duration < max_duration:
        clip2 = clip2.loop(duration=max_duration)

    # Canvas wide enough for both clips, tall enough for the taller one
    total_width = clip1.w + clip2.w
    total_height = max(clip1.h, clip2.h)

    final_clip = CompositeVideoClip([
        clip1.set_position(("left", "center")),
        clip2.set_position(("right", "center")),
    ], size=(total_width, total_height))

    final_clip.write_videofile(output_video_path, codec="libx264")
    print(f"Merged video saved to {output_video_path}")

# Example usage
video_path1 = "zhongli_animation.mp4"
video_path2 = "zhongli_animation_128X_1280fps_wrt.mp4"
output_video_path = "zhongli_inter_video_compare.mp4"
merge_videos_horizontally(video_path1, video_path2, output_video_path)
```

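Under the hood, the composite above amounts to pasting each pair of frames side by side onto a shared canvas. A minimal Pillow sketch of that per-frame operation (a standalone illustration, not part of the moviepy code above):

```python
from PIL import Image

def merge_frames_horizontally(img1, img2):
    # Canvas wide enough for both frames, tall enough for the taller one
    canvas = Image.new("RGB", (img1.width + img2.width, max(img1.height, img2.height)))
    canvas.paste(img1, (0, (canvas.height - img1.height) // 2))           # left, vertically centered
    canvas.paste(img2, (img1.width, (canvas.height - img2.height) // 2))  # right, vertically centered
    return canvas

a = Image.new("RGB", (64, 48), "red")
b = Image.new("RGB", (32, 64), "blue")
merged = merge_frames_horizontally(a, b)
print(merged.size)  # (96, 64)
```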

<div>
  <h3 style="text-align: center;"><b>Left is zhongli_animation.mp4, Right is zhongli_animation_128X_1280fps_wrt.mp4</b></h3>
  <div style="display: flex; flex-direction: column; align-items: center;">
    <div style="margin-bottom: 10px;">
      <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/AgdsshSX-Dt5ObeAkjmby.mp4"></video>
      <p style="text-align: center;">钟离</p>
    </div>
  </div>
</div>