import gradio as gr
import cv2
import torch
import utils
import datetime
import time
import psutil
from imwatermark import WatermarkEncoder
import numpy as np
from PIL import Image
from diffusers import (
    EulerAncestralDiscreteScheduler,
    EulerDiscreteScheduler,
    AutoencoderKL,
    UNet2DConditionModel,
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
)

start_time = time.time()
is_colab = utils.is_google_colab()

#wm = "SDV2"
#wm_encoder = WatermarkEncoder()
#wm_encoder.set_watermark('bytes', wm.encode('utf-8'))

#def put_watermark(img, wm_encoder=None):
#    if wm_encoder is not None:
#        img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
#        img = wm_encoder.encode(img, 'dwtDct')
#        img = Image.fromarray(img[:, :, ::-1])
#    return img


class Model:
    def __init__(self, name, path="", prefix=""):
        self.name = name
        self.path = path
        self.prefix = prefix
        self.pipe_t2i = None
        self.pipe_i2i = None


models = [
    Model("Future Diffusion", "nitrosocke/Future-Diffusion", "future style"),
    # Model("Ghibli Diffusion", "nitrosocke/Ghibli-Diffusion", "ghibli style"),
    # Model("Redshift Diffusion", "nitrosocke/Redshift-Diffusion", "redshift style"),
    # Model("Nitro Diffusion", "nitrosocke/Nitro-Diffusion", "archer arcane modern disney"),
]

scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2-base", subfolder="scheduler")
#scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2-base", subfolder="scheduler")

custom_model = None
if is_colab:
    # The custom model must live at index 0: custom_model_changed() and
    # on_model_change() below both assume models[0] is the custom entry.
    models.insert(0, Model("Custom model"))
    custom_model = models[0]

last_mode = "txt2img"
# On colab, models[0] is the (path-less) custom model, so start with models[1].
current_model = models[1] if is_colab else models[0]
current_model_path = current_model.path

if is_colab:
    pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler)
else:  # download all models
    print(f"{datetime.datetime.now()} Downloading {current_model.name} pipeline...")
    pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler)
    #vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", torch_dtype=torch.float16)
    # Iterate over a copy so that failed models can be removed from the
    # original list without skipping entries mid-iteration.
    for model in list(models):
        try:
            print(f"{datetime.datetime.now()} Downloading {model.name} model...")
            unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", torch_dtype=torch.float16)
            model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, unet=unet, torch_dtype=torch.float16, scheduler=scheduler)
            model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, torch_dtype=torch.float16, scheduler=scheduler)
        except Exception as e:
            print(f"{datetime.datetime.now()} Failed to load model {model.name}: {e}")
            models.remove(model)
    pipe = models[0].pipe_t2i

if torch.cuda.is_available():
    pipe = pipe.to("cuda")

device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶"


def error_str(error, title="Error"):
    return f"""#### {title}
{error}""" if error else ""


def custom_model_changed(path):
    models[0].path = path
    global current_model
    current_model = models[0]


def on_model_change(model_name):
    if model_name != models[0].name:
        prefix = f'Enter prompt. "{next((m.prefix for m in models if m.name == model_name), None)}" is prefixed automatically'
    else:
        prefix = "Don't forget to use the custom model prefix in the prompt!"
    return gr.update(visible=model_name == models[0].name), gr.update(placeholder=prefix)


def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt=""):
    print(psutil.virtual_memory())  # print memory usage

    global current_model
    for model in models:
        if model.name == model_name:
            current_model = model
            model_path = current_model.path

    # Seeded generator for reproducibility; fall back to CPU when CUDA is
    # unavailable. A seed of 0 means "random".
    generator_device = "cuda" if torch.cuda.is_available() else "cpu"
    generator = torch.Generator(generator_device).manual_seed(seed) if seed != 0 else None

    try:
        if img is not None:
            return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
        else:
            return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator), None
    except Exception as e:
        return None, error_str(e)


def txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator):
    print(f"{datetime.datetime.now()} txt_to_img, model: {current_model.name}")

    global last_mode
    global pipe
    global current_model_path
    if model_path != current_model_path or last_mode != "txt2img":
        current_model_path = model_path

        if is_colab or current_model == custom_model:
            pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler)
        else:
            pipe = pipe.to("cpu")
            pipe = current_model.pipe_t2i

        if torch.cuda.is_available():
            pipe = pipe.to("cuda")
        last_mode = "txt2img"

    prompt = f"{current_model.prefix} {prompt}"
    results = pipe(
        prompt,
        negative_prompt=neg_prompt,
        # num_images_per_prompt=n_images,
        num_inference_steps=int(steps),
        guidance_scale=guidance,
        width=width,
        height=height,
        generator=generator)

    return results.images[0]


def img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
    print(f"{datetime.datetime.now()} img_to_img, model: {model_path}")

    global last_mode
    global pipe
    global current_model_path
    if model_path != current_model_path or last_mode != "img2img":
        current_model_path = model_path

        if is_colab or current_model == custom_model:
            # Use the img2img pipeline class here; the plain text-to-image
            # pipeline would ignore the init image entirely.
            pipe = StableDiffusionImg2ImgPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler)
        else:
            pipe = pipe.to("cpu")
            pipe = current_model.pipe_i2i

        if torch.cuda.is_available():
            pipe = pipe.to("cuda")
        last_mode = "img2img"

    prompt = f"{current_model.prefix} {prompt}"
    # Scale the init image to fit within the requested width/height.
    ratio = min(height / img.height, width / img.width)
    img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
    results = pipe(
        prompt,
        negative_prompt=neg_prompt,
        # num_images_per_prompt=n_images,
        image=img,  # current diffusers name for the former `init_image` argument
        num_inference_steps=int(steps),
        strength=strength,
        guidance_scale=guidance,
        # img2img derives the output size from the init image, so
        # width/height are not passed to the pipeline call.
        generator=generator)

    return results.images[0]


def replace_nsfw_images(results):
    if is_colab:
        return results.images[0]

    for i in range(len(results.images)):
        if results.nsfw_content_detected[i]:
            results.images[i] = Image.open("nsfw.png")
    return results.images[0]


css = """.finetuned-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.finetuned-diffusion-div div h1{font-weight:900;margin-bottom:7px}.finetuned-diffusion-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
"""
with gr.Blocks(css=css) as demo:
    gr.HTML(
        f"""
        <div class="finetuned-diffusion-div">
          <div>
            <h1>Diffusion Space</h1>
          </div>
          <p>
            Demo for Nitrosocke's fine-tuned models. Running on <b>{device}</b>.
          </p>
          <p>
            You can skip the queue and load custom models in the colab: Open In Colab
          </p>
          <p>
            You can also duplicate this space and upgrade to GPU by going to settings: Duplicate Space
          </p>
        </div>
""" ) with gr.Row(): with gr.Column(scale=55): with gr.Group(): model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name) with gr.Box(visible=False) as custom_model_group: custom_model_path = gr.Textbox(label="Custom model path", placeholder="nitrosocke/Future-Diffusion", interactive=False) gr.HTML("
Custom models have to be downloaded first, so give it some time.
") with gr.Row(): prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder="Enter prompt. Style applied automatically").style(container=False) generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) image_out = gr.Image(height=512) # gallery = gr.Gallery( # label="Generated images", show_label=False, elem_id="gallery" # ).style(grid=[1], height="auto") error_output = gr.Markdown() with gr.Column(scale=45): with gr.Tab("Options"): with gr.Group(): neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1) with gr.Row(): guidance = gr.Slider(label="Guidance scale", value=7, maximum=15, step=1) steps = gr.Slider(label="Steps", value=20, minimum=2, maximum=30, step=1) with gr.Row(): width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=64) height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=64) seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) with gr.Tab("Image to image"): with gr.Group(): image = gr.Image(label="Image", height=256, tool="editor", type="pil") strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) if is_colab: model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False) custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None) # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery) inputs = [model_name, prompt, guidance, steps, width, height, seed, image, strength, neg_prompt] outputs = [image_out, error_output] prompt.submit(inference, inputs=inputs, outputs=outputs) generate.click(inference, inputs=inputs, outputs=outputs) ex = gr.Examples([ [models[0].name, "city scene at night intricate street level", "blurry fog soft", 7, 20], [models[0].name, "beautiful female cyborg sitting in a cafe close up", "bad anatomy bad eyes blurry soft", 7, 20], [models[0].name, "cyborg dog neon eyes", "extra mouth extra legs blurry soft bloom bad anatomy", 7, 20], ], inputs=[model_name, prompt, neg_prompt, guidance, steps, seed], outputs=outputs, fn=inference, cache_examples=False) gr.HTML("""
    <p>Model by Nitrosocke.</p>
    """)

print(f"Space built in {time.time() - start_time:.2f} seconds")

if not is_colab:
    demo.queue(concurrency_count=1)
demo.launch(debug=is_colab, share=is_colab)
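# Running locally (assumptions, not stated by the source): `utils` is a small
# local helper module exposing is_google_colab(), not a PyPI package. The
# remaining imports roughly correspond to:
#   pip install gradio diffusers transformers accelerate psutil opencv-python invisible-watermark
# after which the app can be started with:  python app.py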