license: openrail
ControlNet-XS model for Stable Diffusion 2.1 and depth input
🔬 Original paper and models by https://github.com/vislearn/ControlNet-XS
👷🏽‍♂️ Translated into the diffusers architecture by https://twitter.com/UmerHAdil
This model is trained for use with Stable Diffusion 2.1.
ControlNet-XS was introduced in ControlNet-XS by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the original ControlNet can be made much smaller and still produce good results.
As with the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet-XS model generates an image that preserves the spatial information from the depth map. This is a more flexible and accurate way to control the image generation process.
Using ControlNet-XS instead of regular ControlNet will produce images of roughly the same quality, but 20-25% faster (see benchmark) and with ~45% less memory usage.
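Below is a minimal usage sketch with 🤗 diffusers, assuming a recent release that ships `StableDiffusionControlNetXSPipeline` and `ControlNetXSAdapter`. The checkpoint id, depth-map path, and prompt are placeholders, not values taken from this card.

```python
# Minimal sketch, assuming a recent diffusers release with
# StableDiffusionControlNetXSPipeline and ControlNetXSAdapter.
# The checkpoint id, depth-map path, and prompt below are placeholders.
import torch
from diffusers import StableDiffusionControlNetXSPipeline, ControlNetXSAdapter
from diffusers.utils import load_image

# Load the ControlNet-XS depth adapter (replace with this repository's id).
controlnet = ControlNetXSAdapter.from_pretrained(
    "<controlnet-xs-sd2.1-depth-repo>", torch_dtype=torch.float16
)

# Attach the adapter to the Stable Diffusion 2.1 base model.
pipe = StableDiffusionControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The control image: a depth map, e.g. produced by a monocular depth estimator.
depth_image = load_image("path/to/depth_map.png")

image = pipe(
    prompt="a futuristic living room, photorealistic, high detail",
    image=depth_image,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```

Any monocular depth estimator (for example, a DPT/MiDaS model via the 🤗 transformers depth-estimation pipeline) can be used to produce the depth map that serves as the control image.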
Other ControlNet-XS models: