Can this model be used in StableDiffusionXLControlNetImg2ImgPipeline?

#8
by ElectricGoal - opened

I have a question: if this model can be used in StableDiffusionXLControlNetImg2ImgPipeline, do I need to create a control_img like in the example, or can I just pass the input image directly to both the image and control_image arguments of the pipeline?

Here is my code:

import torch
from PIL import Image
from diffusers import (
    AutoencoderKL,
    ControlNetModel,
    EulerAncestralDiscreteScheduler,
    StableDiffusionXLControlNetImg2ImgPipeline,
)

device = "cuda"

# Euler Ancestral scheduler loaded from the base model's scheduler config
eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler"
)

# fp16-safe VAE and the tile ControlNet
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
controlnet = ControlNetModel.from_pretrained("xinsir/controlnet-tile-sdxl-1.0", torch_dtype=torch.float16)

base = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    vae=vae,
    controlnet=controlnet,
    variant="fp16",
    use_safetensors=True,
    scheduler=eulera_scheduler,
).to(device)

image = Image.open('...')
image = image.resize((1024, 1024))
control_img = ...  # created like in your example

base_output = base(
    prompt=control_prompt,
    image=image,
    control_image=image,  # Question: control_img or image?
).images[0]
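
To make the two options concrete, here is a sketch of both calls against the pipeline's image / control_image arguments (the strength and controlnet_conditioning_scale values below are placeholders I picked, not taken from the model card):

# Option A: pass the source image for both arguments (no separate control image).
out_a = base(
    prompt=control_prompt,
    image=image,
    control_image=image,
    strength=0.5,                       # placeholder value
    controlnet_conditioning_scale=1.0,  # placeholder value
).images[0]

# Option B: build control_img following the model card example and pass it separately.
out_b = base(
    prompt=control_prompt,
    image=image,
    control_image=control_img,
    strength=0.5,
    controlnet_conditioning_scale=1.0,
).images[0]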

