import os

import gradio as gr

with gr.Box(visible=bool(os.environ.get("SPACE_ID"))):
    if os.environ.get("SPACE_ID") and str(os.environ.get("IS_SHARED_UI", "") or "") not in ("", "0"):
        import torch

        if not torch.cuda.is_available():
            gr.HTML(f"""
                ▲ Automatic1111's Stable Diffusion WebUI + Mikubill's ControlNet WebUI extension | Running on Hugging Face | Loaded checkpoint: AtoZovyaRPGArtistTools15_sd15V1
                ▲ Docker build from 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker / 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker
                ▲ This Space is currently running on CPU, which may yield very slow results - you can upgrade to a GPU after duplicating the Space.
                ▲ Duplicate this Space to run it privately without a queue, use a GPU for faster generation times, load custom checkpoints, etc.
            """)
        else:
            gr.HTML(f"""
                ▲ Automatic1111's Stable Diffusion WebUI + Mikubill's ControlNet WebUI extension | Running on Hugging Face | Loaded checkpoint: AtoZovyaRPGArtistTools15_sd15V1
                ▲ Docker build from 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker / 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker
                ▲ Duplicate this Space to run it privately without a queue, use extensions, load custom checkpoints, etc.
            """)
    else:
        import torch

        if not torch.cuda.is_available():
            gr.HTML(f"""
                ▲ Docker build from 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker / 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker
                ▲ Load additional checkpoints, VAE, LoRA models, etc. Read more in the README at the GitHub link above.
                ▲ This Space is currently running on CPU, which may yield very slow results - you can upgrade to a GPU in the Settings tab.
            """)
        else:
            gr.HTML(f"""
                ▲ Docker build from 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker / 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker
                ▲ Load additional checkpoints, VAE, LoRA models, etc. Read more in the README at the GitHub link above.
                ▲ This Space has GPU enabled - remember to remove the GPU from the Space in the Settings tab when you're done.
            """)
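The least obvious part of the condition at the top is `str(os.environ.get("IS_SHARED_UI", "") or "") not in ("", "0")`, which treats an unset, empty, or `"0"` environment variable as false and anything else as true. A minimal sketch isolating that rule (the helper name `env_flag` is hypothetical, not part of the app):

```python
import os


def env_flag(name: str) -> bool:
    # Hypothetical helper mirroring the IS_SHARED_UI check above:
    # unset, empty, and "0" count as false; any other value as true.
    return str(os.environ.get(name, "") or "") not in ("", "0")


os.environ["IS_SHARED_UI"] = "1"
print(env_flag("IS_SHARED_UI"))  # a set, non-"0" value counts as true

os.environ["IS_SHARED_UI"] = "0"
print(env_flag("IS_SHARED_UI"))  # "0" is explicitly treated as false
```

The `or ""` guards against the variable being set to a non-string falsy value before the `str()` conversion, so the membership test only ever compares strings.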