
StableV2V: Stablizing Shape Consistency in Video-to-Video Editing

Chang Liu, Rui Li, Kaidong Zhang, Yunwei Lan, Dong Liu

[Paper] / [Project] / [GitHub] / [DAVIS-Edit]

Official pre-trained model weights of the paper titled "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing".

Model Weights Structure

We organize our model weights according to the following structure:

```
StableV2V
β”œβ”€β”€ controlnet-depth               <----- ControlNet (depth), required by CIG
β”œβ”€β”€ controlnet-scribble            <----- ControlNet (scribble), required by the sketch-based editing application
β”œβ”€β”€ ctrl-adapter-i2vgenxl-depth    <----- Ctrl-Adapter (I2VGen-XL, depth), required by CIG
β”œβ”€β”€ i2vgenxl                       <----- I2VGen-XL, required by CIG
β”œβ”€β”€ instruct-pix2pix               <----- InstructPix2Pix, required by PFE
β”œβ”€β”€ paint-by-example               <----- Paint-by-Example, required by PFE
β”œβ”€β”€ stable-diffusion-v1-5-inpaint  <----- SD Inpaint, required by PFE
β”œβ”€β”€ stable-diffusion-v1.5          <----- SD v1.5, required by CIG
β”œβ”€β”€ README.md
β”œβ”€β”€ dpt_swin2_large_384.pt         <----- MiDaS, required by ISA
β”œβ”€β”€ raft-things.pth                <----- RAFT, required by ISA
β”œβ”€β”€ u2net.pth                      <----- U²-Net, required by ISA
└── 50000.ckpt                     <----- Shape-guided depth refinement network, required by ISA
```
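After downloading, it can be handy to verify that a local copy matches the layout above. The sketch below is a minimal, hypothetical checker (not part of the official codebase); the stage-to-file mapping simply mirrors the `required by` annotations in the tree, and the function and dictionary names are our own.

```python
from pathlib import Path

# Expected entries per pipeline stage, transcribed from the weight
# structure above. Stage names (PFE, ISA, CIG) follow the annotations
# in the tree; this mapping is illustrative, not an official manifest.
REQUIRED_WEIGHTS = {
    "PFE": [
        "instruct-pix2pix",
        "paint-by-example",
        "stable-diffusion-v1-5-inpaint",
    ],
    "ISA": [
        "dpt_swin2_large_384.pt",
        "raft-things.pth",
        "u2net.pth",
        "50000.ckpt",
    ],
    "CIG": [
        "controlnet-depth",
        "ctrl-adapter-i2vgenxl-depth",
        "i2vgenxl",
        "stable-diffusion-v1.5",
    ],
}

def missing_weights(root):
    """Return {stage: [missing entries]} for weights not found under root."""
    root = Path(root)
    return {
        stage: [name for name in names if not (root / name).exists()]
        for stage, names in REQUIRED_WEIGHTS.items()
    }
```

For example, `missing_weights("StableV2V")` returns an empty list for every stage when the download is complete, so any non-empty list points at the component whose weights still need fetching.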