Can this be used as-is?
I'm a little confused after reading the linked ControlNet++ GitHub repository. From there, I read that a "control type id" is required for inference (for the right model to be used). Is this just for ControlNet++? Can this model be loaded with a regular Diffusers pipeline (like the ones SD.Next uses) and used as-is? Or is it important to use the pipeline supplied in the GH repo?
Can the README be amended to clarify?
Yes, this is just for ControlNet++. The control type id must be fed into the network to tell it which control to use. Because this is a new architecture, the existing diffusers pipelines can't be used as-is; they need to be changed a little, but not much. I will soon provide the corresponding new pipelines, such as img2img_controlnet_union and inpainting_controlnet_union.
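Until those land, here is a rough sketch of what inference with the repo's own pipeline looks like. It assumes you are running from a clone of the ControlNet++ repo (the `models`/`pipeline` modules live there, not in the diffusers package), and the class names, argument names (`image_list`, `union_control`, `union_control_type`), and the `xinsir/controlnet-union-sdxl-1.0` repo id are taken from the repo's example code — treat all of them as assumptions rather than a stable API:

```python
# Sketch only — run from inside a clone of the ControlNet++ repo, since the
# ControlNetModel_Union and StableDiffusionXLControlNetUnionPipeline classes
# are shipped there rather than in the diffusers package.
import torch
from diffusers.utils import load_image
from models.controlnet_union import ControlNetModel_Union  # repo module (assumption)
from pipeline.pipeline_controlnet_union_sd_xl import (     # repo module (assumption)
    StableDiffusionXLControlNetUnionPipeline,
)

controlnet = ControlNetModel_Union.from_pretrained(
    "xinsir/controlnet-union-sdxl-1.0", torch_dtype=torch.float16  # repo id is an assumption
)
pipe = StableDiffusionXLControlNetUnionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A preprocessed control image (e.g. a depth map); the URL is a placeholder.
control_img = load_image("https://example.com/depth_map.png")

# The one-hot tensor is the "control type id": it tells the network which
# control branch the image belongs to. The index-to-control mapping follows
# the repo's convention (see its README); index 3 here is just an example.
images = pipe(
    prompt="a photo of a cat, best quality",
    image_list=[0, 0, 0, control_img, 0, 0],  # slot the image at its control's index
    union_control=True,
    union_control_type=torch.Tensor([0, 0, 0, 1, 0, 0]),
    num_inference_steps=30,
).images
```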
How do you use ControlNet++ with diffusers?
I've tried fiddling around a little in the controlnet extension of A1111, and the model seems to work quite well with raw images as control images. That is, the model seems to understand the image well even when no canny, depth map, or other preprocessor is applied and the image is fed as-is. What's more, from my testing, the model still works well if you remove the transformer block entirely and don't concatenate anything to begin with.
So, again, from my own testing, it seems like you CAN use this controlnet model without feeding the ID vector and such, even though it was not designed for that.
I'm still investigating why it works and if something else is at play, so maybe take this with a grain of salt.
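For anyone who wants to try reproducing this outside A1111, here is a minimal sketch of what "using it as a plain controlnet" would look like with the stock diffusers pipeline. It assumes the checkpoint loads into the standard `ControlNetModel` despite its extra union modules (loading may warn about or reject the unexpected weights), and the `xinsir/controlnet-union-sdxl-1.0` repo id and the image URL are assumptions/placeholders:

```python
# Sketch only: treat the union checkpoint as an ordinary SDXL controlnet and
# feed a raw, unpreprocessed photo as the control image, mirroring the A1111
# experiment described above. Whether from_pretrained accepts the extra union
# weights without complaint is an open question, not a confirmed behavior.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "xinsir/controlnet-union-sdxl-1.0", torch_dtype=torch.float16  # repo id is an assumption
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Raw photo, no canny/depth/other preprocessing — fed as-is.
control_image = load_image("https://example.com/raw_photo.png")  # placeholder URL

image = pipe(
    "a photo of a cat, best quality",
    image=control_image,
    controlnet_conditioning_scale=0.8,
    num_inference_steps=30,
).images[0]
```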