Testing the new pix2pix-Turbo in real time, a very interesting GAN architecture that leverages the SD-Turbo model. Here I'm using the edge2image LoRA with single-step inference 🤯
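Since the pix2pix-Turbo code isn't released yet, here's a minimal sketch of what single-step generation looks like with the underlying SD-Turbo model via diffusers. This is plain img2img rather than the paper's edge-conditioned GAN, and the prompt, image path, and resolution are placeholders:

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Distilled SD-Turbo pipeline (the base model pix2pix-Turbo builds on).
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

init_image = load_image("input.jpg").resize((512, 512))  # placeholder input

# Single denoising step; classifier-free guidance is disabled for turbo models.
image = pipe(
    "a photo of a cozy cabin in the woods",  # placeholder prompt
    image=init_image,
    num_inference_steps=1,
    strength=1.0,
    guidance_scale=0.0,
).images[0]
image.save("sd_turbo_one_step.png")
```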
It's very interesting how the quality is comparable to ControlNet Canny, but in a single step. Looking forward to when they release the code: https://github.com/GaParmar/img2img-turbo/issues/1
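For a rough point of comparison, this is the kind of multi-step ControlNet Canny pipeline I'm comparing against. The model IDs, Canny thresholds, and prompt below are just common defaults, not anything from the paper:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Build the Canny edge map that serves as the conditioning input.
frame = cv2.imread("input.jpg")
edges = cv2.Canny(frame, 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Standard multi-step ControlNet Canny pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a photo of a modern living room",  # placeholder prompt
    image=edge_image,
    num_inference_steps=20,  # vs. a single step for pix2pix-Turbo
).images[0]
image.save("controlnet_canny_baseline.png")
```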
I've been keeping a list of fast diffusion model pipelines together with this real-time websocket app. Have a look if you want to test it locally, or check out the demo here on Spaces.
radames/real-time-pix2pix-turbo
GitHub app:
https://github.com/radames/Real-Time-Latent-Consistency-Model/
You can also check out the authors' img2img sketch model here:
gparmar/img2img-turbo-sketch
Refs:
One-Step Image Translation with Text-to-Image Models (2403.12036)
cc @gparmar @junyanz