diffusers-bot committed
Commit ae21657
1 Parent(s): 1427a0e

Upload folder using huggingface_hub

Files changed (41)
  1. v0.19.2/README.md +1769 -0
  2. v0.19.2/bit_diffusion.py +264 -0
  3. v0.19.2/checkpoint_merger.py +286 -0
  4. v0.19.2/clip_guided_images_mixing_stable_diffusion.py +456 -0
  5. v0.19.2/clip_guided_stable_diffusion.py +347 -0
  6. v0.19.2/clip_guided_stable_diffusion_img2img.py +496 -0
  7. v0.19.2/composable_stable_diffusion.py +580 -0
  8. v0.19.2/ddim_noise_comparative_analysis.py +190 -0
  9. v0.19.2/edict_pipeline.py +264 -0
  10. v0.19.2/iadb.py +149 -0
  11. v0.19.2/imagic_stable_diffusion.py +496 -0
  12. v0.19.2/img2img_inpainting.py +463 -0
  13. v0.19.2/interpolate_stable_diffusion.py +524 -0
  14. v0.19.2/lpw_stable_diffusion.py +1470 -0
  15. v0.19.2/lpw_stable_diffusion_onnx.py +1146 -0
  16. v0.19.2/magic_mix.py +152 -0
  17. v0.19.2/mixture_canvas.py +503 -0
  18. v0.19.2/mixture_tiling.py +405 -0
  19. v0.19.2/multilingual_stable_diffusion.py +436 -0
  20. v0.19.2/one_step_unet.py +24 -0
  21. v0.19.2/sd_text2img_k_diffusion.py +475 -0
  22. v0.19.2/seed_resize_stable_diffusion.py +366 -0
  23. v0.19.2/speech_to_image_diffusion.py +261 -0
  24. v0.19.2/stable_diffusion_comparison.py +405 -0
  25. v0.19.2/stable_diffusion_controlnet_img2img.py +989 -0
  26. v0.19.2/stable_diffusion_controlnet_inpaint.py +1138 -0
  27. v0.19.2/stable_diffusion_controlnet_inpaint_img2img.py +1119 -0
  28. v0.19.2/stable_diffusion_controlnet_reference.py +834 -0
  29. v0.19.2/stable_diffusion_ipex.py +848 -0
  30. v0.19.2/stable_diffusion_mega.py +227 -0
  31. v0.19.2/stable_diffusion_reference.py +796 -0
  32. v0.19.2/stable_diffusion_repaint.py +956 -0
  33. v0.19.2/stable_diffusion_tensorrt_img2img.py +1055 -0
  34. v0.19.2/stable_diffusion_tensorrt_inpaint.py +1088 -0
  35. v0.19.2/stable_diffusion_tensorrt_txt2img.py +928 -0
  36. v0.19.2/stable_unclip.py +287 -0
  37. v0.19.2/text_inpainting.py +302 -0
  38. v0.19.2/tiled_upscaling.py +298 -0
  39. v0.19.2/unclip_image_interpolation.py +495 -0
  40. v0.19.2/unclip_text_interpolation.py +573 -0
  41. v0.19.2/wildcard_stable_diffusion.py +418 -0
v0.19.2/README.md ADDED
@@ -0,0 +1,1769 @@
1
+ # Community Examples
2
+
3
+ > **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
4
+
5
+ **Community** examples consist of both inference and training examples that have been added by the community.
6
+ Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out.
7
+ If a community pipeline doesn't work as expected, please open an issue and ping the author on it.
8
+
9
+ | Example | Description | Code Example | Colab | Author |
10
+ |:--------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------:|
11
+ | CLIP Guided Stable Diffusion | Doing CLIP guidance for text to image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) |
12
+ | One Step U-Net (Dummy) | Example showcasing how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
13
+ | Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) |
14
+ | Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
15
+ | Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) |
16
+ | Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech)
17
+ | Wild Card Stable Diffusion | Stable Diffusion Pipeline that supports prompts that contain wildcard terms (indicated by surrounding double underscores), with values instantiated randomly from a corresponding txt file or a dictionary of possible values | [Wildcard Stable Diffusion](#wildcard-stable-diffusion) | - | [Shyam Sudhakaran](https://github.com/shyamsn97) |
18
+ | [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) | Stable Diffusion Pipeline that supports prompts that contain "|" in prompts (as an AND condition) and weights (separated by "|" as well) to positively / negatively weight prompts. | [Composable Stable Diffusion](#composable-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
19
+ | Seed Resizing Stable Diffusion | Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | - | [Mark Rich](https://github.com/MarkRich) |
20
+ | Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image | [Imagic Stable Diffusion](#imagic-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
21
+ | Multilingual Stable Diffusion | Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | - | [Juan Carlos Piñeros](https://github.com/juancopi81) |
22
+ | Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
23
+ | Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#text-based-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) |
24
+ | Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - | [Stuti R.](https://github.com/kingstut) |
25
+ | K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
26
+ | Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
27
+ | Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Diffusion and compare their results together | [Stable Diffusion Comparison](#stable-diffusion-comparisons) | - | [Suvaditya Mukherjee](https://github.com/suvadityamuk) |
28
+ | MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | - | [Partho Das](https://github.com/daspartho) |
29
+ | Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) | - | [Ray Wang](https://wrong.wang) |
30
+ | UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
31
+ | UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
32
+ | DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | - | [Aengus (Duc-Anh)](https://github.com/aengusng8) |
33
+ | CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
34
+ | TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
35
+ | EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | - | [Joqsan Azocar](https://github.com/Joqsan) |
36
+ | Stable Diffusion RePaint | Stable Diffusion pipeline using [RePaint](https://arxiv.org/abs/2201.09865) for inpainting. | [Stable Diffusion RePaint](#stable-diffusion-repaint) | - | [Markus Pobitzer](https://github.com/Markus-Pobitzer) |
37
+ | TensorRT Stable Diffusion Image to Image Pipeline | Accelerates the Stable Diffusion Image2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Image to Image Pipeline](#tensorrt-image2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
38
+ | Stable Diffusion IPEX Pipeline | Accelerate Stable Diffusion inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion on IPEX](#stable-diffusion-on-ipex) | - | [Yingjie Han](https://github.com/yingjie-han/) |
39
+ | CLIP Guided Images Mixing Stable Diffusion Pipeline | Combine images using standard diffusion models. | [CLIP Guided Images Mixing Using Stable Diffusion](#clip-guided-images-mixing-with-stable-diffusion) | - | [Karachev Denis](https://github.com/TheDenk) |
40
+ | TensorRT Stable Diffusion Inpainting Pipeline | Accelerates the Stable Diffusion Inpainting Pipeline using TensorRT | [TensorRT Stable Diffusion Inpainting Pipeline](#tensorrt-inpainting-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
41
+ | IADB Pipeline | Implementation of [Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) | [IADB Pipeline](#iadb-pipeline) | - | [Thomas Chambon](https://github.com/tchambon)
42
+
43
+ To load a custom pipeline, pass the name of one of the files in `diffusers/examples/community` as the `custom_pipeline` argument to `DiffusionPipeline`. Feel free to send a PR with your own pipelines; we will merge them quickly.
44
+ ```py
45
+ from diffusers import DiffusionPipeline
+
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="filename_in_the_community_folder")
46
+ ```
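+
+ As used in the Seed Resizing example further down, `custom_pipeline` can also point to a local copy of the community folder instead of a file name on the Hub. A minimal sketch, with a placeholder path:
+
+ ```python
+ from diffusers import DiffusionPipeline
+
+ # placeholder path to a local clone of diffusers/examples/community
+ pipe = DiffusionPipeline.from_pretrained(
+     "runwayml/stable-diffusion-v1-5",
+     custom_pipeline="/path/to/diffusers/examples/community/",
+ )
+ ```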
47
+
48
+ ## Example usages
49
+
50
+ ### CLIP Guided Stable Diffusion
51
+
52
+ CLIP guided stable diffusion can help to generate more realistic images
53
+ by guiding stable diffusion at every denoising step with an additional CLIP model.
54
+
55
+ The following code requires roughly 12GB of GPU RAM.
56
+
57
+ ```python
58
+ from diffusers import DiffusionPipeline
59
+ from transformers import CLIPImageProcessor, CLIPModel
60
+ import torch
61
+
62
+
63
+ feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
64
+ clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)
65
+
66
+
67
+ guided_pipeline = DiffusionPipeline.from_pretrained(
68
+ "runwayml/stable-diffusion-v1-5",
69
+ custom_pipeline="clip_guided_stable_diffusion",
70
+ clip_model=clip_model,
71
+ feature_extractor=feature_extractor,
72
+
73
+ torch_dtype=torch.float16,
74
+ )
75
+ guided_pipeline.enable_attention_slicing()
76
+ guided_pipeline = guided_pipeline.to("cuda")
77
+
78
+ prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
79
+
80
+ generator = torch.Generator(device="cuda").manual_seed(0)
81
+ images = []
82
+ for i in range(4):
83
+ image = guided_pipeline(
84
+ prompt,
85
+ num_inference_steps=50,
86
+ guidance_scale=7.5,
87
+ clip_guidance_scale=100,
88
+ num_cutouts=4,
89
+ use_cutouts=False,
90
+ generator=generator,
91
+ ).images[0]
92
+ images.append(image)
93
+
94
+ # save images locally
95
+ for i, img in enumerate(images):
96
+ img.save(f"./clip_guided_sd/image_{i}.png")
97
+ ```
98
+
99
+ The `images` list contains PIL images that can be saved locally or displayed directly in a Google Colab.
100
+ Generated images tend to be of higher quality than those from plain Stable Diffusion. For example, the above script generates the following images:
101
+
102
+ ![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg)
103
+
104
+ ### One Step Unet
105
+
106
+ The dummy "one-step-unet" can be run as follows:
107
+
108
+ ```python
109
+ from diffusers import DiffusionPipeline
110
+
111
+ pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
112
+ pipe()
113
+ ```
114
+
115
+ **Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841).
116
+
117
+ ### Stable Diffusion Interpolation
118
+
119
+ The following code can be run on a GPU with at least 8GB of VRAM and should take approximately 5 minutes.
120
+
121
+ ```python
122
+ from diffusers import DiffusionPipeline
123
+ import torch
124
+
125
+ pipe = DiffusionPipeline.from_pretrained(
126
+ "CompVis/stable-diffusion-v1-4",
127
+ revision='fp16',
128
+ torch_dtype=torch.float16,
129
+ safety_checker=None, # Very important for videos...lots of false positives while interpolating
130
+ custom_pipeline="interpolate_stable_diffusion",
131
+ ).to('cuda')
132
+ pipe.enable_attention_slicing()
133
+
134
+ frame_filepaths = pipe.walk(
135
+ prompts=['a dog', 'a cat', 'a horse'],
136
+ seeds=[42, 1337, 1234],
137
+ num_interpolation_steps=16,
138
+ output_dir='./dreams',
139
+ batch_size=4,
140
+ height=512,
141
+ width=512,
142
+ guidance_scale=8.5,
143
+ num_inference_steps=50,
144
+ )
145
+ ```
146
+
147
+ The `walk(...)` function returns a list of images saved under the folder defined in `output_dir`. You can use these images to create videos of Stable Diffusion.
148
+
149
+ > **Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.**
150
+
151
+ ### Stable Diffusion Mega
152
+
153
+ The Stable Diffusion Mega Pipeline exposes the main use cases of the Stable Diffusion pipeline (text-to-image, image-to-image, and inpainting) in a single class.
154
+
155
+ ```python
156
+ #!/usr/bin/env python3
157
+ from diffusers import DiffusionPipeline
158
+ import PIL
159
+ import requests
160
+ from io import BytesIO
161
+ import torch
162
+
163
+
164
+ def download_image(url):
165
+ response = requests.get(url)
166
+ return PIL.Image.open(BytesIO(response.content)).convert("RGB")
167
+
168
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_mega", torch_dtype=torch.float16, revision="fp16")
169
+ pipe.to("cuda")
170
+ pipe.enable_attention_slicing()
171
+
172
+
173
+ ### Text-to-Image
174
+
175
+ images = pipe.text2img("An astronaut riding a horse").images
176
+
177
+ ### Image-to-Image
178
+
179
+ init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
180
+
181
+ prompt = "A fantasy landscape, trending on artstation"
182
+
183
+ images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
184
+
185
+ ### Inpainting
186
+
187
+ img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
188
+ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
189
+ init_image = download_image(img_url).resize((512, 512))
190
+ mask_image = download_image(mask_url).resize((512, 512))
191
+
192
+ prompt = "a cat sitting on a bench"
193
+ images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images
194
+ ```
195
+
196
+ As shown above, this single pipeline can run "text-to-image", "image-to-image", and "inpainting".
197
+
198
+ ### Long Prompt Weighting Stable Diffusion
199
+ Features of this custom pipeline:
200
+ - Input a prompt without the 77 token length limit.
201
+ - Includes text2img, img2img, and inpainting pipelines.
202
+ - Emphasize/weigh part of your prompt with parentheses as so: `a baby deer with (big eyes)`
203
+ - De-emphasize part of your prompt as so: `a [baby] deer with big eyes`
204
+ - Precisely weigh part of your prompt as so: `a baby deer with (big eyes:1.3)`
205
+
206
+ Prompt weighting equivalents:
207
+ - `a baby deer with` == `(a baby deer with:1.0)`
208
+ - `(big eyes)` == `(big eyes:1.1)`
209
+ - `((big eyes))` == `(big eyes:1.21)`
210
+ - `[big eyes]` == `(big eyes:0.91)`
211
+
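+ For intuition, each pair of round brackets multiplies the token weight by 1.1 and each pair of square brackets divides it by 1.1; a minimal sketch of the arithmetic behind the equivalents above (not the pipeline's actual prompt parser):
+
+ ```python
+ def bracket_weight(n_round: int = 0, n_square: int = 0) -> float:
+     # each "(...)" multiplies the weight by 1.1, each "[...]" divides it by 1.1
+     return round(1.1 ** n_round / 1.1 ** n_square, 2)
+
+ print(bracket_weight(n_round=1))   # 1.1  -> (big eyes)
+ print(bracket_weight(n_round=2))   # 1.21 -> ((big eyes))
+ print(bracket_weight(n_square=1))  # 0.91 -> [big eyes]
+ ```
+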
212
+ You can run this custom pipeline like so:
213
+
214
+ #### pytorch
215
+
216
+ ```python
217
+ from diffusers import DiffusionPipeline
218
+ import torch
219
+
220
+ pipe = DiffusionPipeline.from_pretrained(
221
+ 'hakurei/waifu-diffusion',
222
+ custom_pipeline="lpw_stable_diffusion",
223
+
224
+ torch_dtype=torch.float16
225
+ )
226
+ pipe=pipe.to("cuda")
227
+
228
+ prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
229
+ neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
230
+
231
+ pipe.text2img(prompt, negative_prompt=neg_prompt, width=512,height=512,max_embeddings_multiples=3).images[0]
232
+
233
+ ```
234
+
235
+ #### onnxruntime
236
+
237
+ ```python
238
+ from diffusers import DiffusionPipeline
239
+ import torch
240
+
241
+ pipe = DiffusionPipeline.from_pretrained(
242
+ 'CompVis/stable-diffusion-v1-4',
243
+ custom_pipeline="lpw_stable_diffusion_onnx",
244
+ revision="onnx",
245
+ provider="CUDAExecutionProvider"
246
+ )
247
+
248
+ prompt = "a photo of an astronaut riding a horse on mars, best quality"
249
+ neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
250
+
251
+ pipe.text2img(prompt,negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
252
+
253
+ ```
254
+
255
+ If you see `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`, do not worry; it is normal.
256
+
257
+ ### Speech to Image
258
+
259
+ The following code can generate an image from an audio sample using the pre-trained OpenAI Whisper-small model and Stable Diffusion.
260
+
261
+ ```Python
262
+ import torch
263
+
264
+ import matplotlib.pyplot as plt
265
+ from datasets import load_dataset
266
+ from diffusers import DiffusionPipeline
267
+ from transformers import (
268
+ WhisperForConditionalGeneration,
269
+ WhisperProcessor,
270
+ )
271
+
272
+
273
+ device = "cuda" if torch.cuda.is_available() else "cpu"
274
+
275
+ ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
276
+
277
+ audio_sample = ds[3]
278
+
279
+ text = audio_sample["text"].lower()
280
+ speech_data = audio_sample["audio"]["array"]
281
+
282
+ model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
283
+ processor = WhisperProcessor.from_pretrained("openai/whisper-small")
284
+
285
+ diffuser_pipeline = DiffusionPipeline.from_pretrained(
286
+ "CompVis/stable-diffusion-v1-4",
287
+ custom_pipeline="speech_to_image_diffusion",
288
+ speech_model=model,
289
+ speech_processor=processor,
290
+
291
+ torch_dtype=torch.float16,
292
+ )
293
+
294
+ diffuser_pipeline.enable_attention_slicing()
295
+ diffuser_pipeline = diffuser_pipeline.to(device)
296
+
297
+ output = diffuser_pipeline(speech_data)
298
+ plt.imshow(output.images[0])
299
+ ```
300
+ This example produces the following image:
301
+
302
+ ![image](https://user-images.githubusercontent.com/45072645/196901736-77d9c6fc-63ee-4072-90b0-dc8b903d63e3.png)
303
+
304
+ ### Wildcard Stable Diffusion
305
+ Following the great examples from https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py and https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts#wildcards, here's a minimal implementation that allows for users to add "wildcards", denoted by `__wildcard__` to prompts that are used as placeholders for randomly sampled values given by either a dictionary or a `.txt` file. For example:
306
+
307
+ Say we have a prompt:
308
+
309
+ ```
310
+ prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
311
+ ```
312
+
313
+ We can then define possible values to be sampled for `animal`, `object`, and `clothing`. These can be read from a `.txt` file with the same name as the category.
314
+
315
+ The possible values can also be defined or combined by using a dictionary like: `{"animal": ["dog", "cat", "mouse"]}`.
316
+
317
+ The actual pipeline works just like `StableDiffusionPipeline`, except the `__call__` method takes in:
318
+
319
+ `wildcard_files`: list of file paths for wild card replacement
320
+ `wildcard_option_dict`: dict with key as `wildcard` and values as a list of possible replacements
321
+ `num_prompt_samples`: number of prompts to sample, uniformly sampling wildcards
322
+
323
+ A full example:
324
+
325
+ create `animal.txt`, with contents like:
326
+
327
+ ```
328
+ dog
329
+ cat
330
+ mouse
331
+ ```
332
+
333
+ create `object.txt`, with contents like:
334
+
335
+ ```
336
+ chair
337
+ sofa
338
+ bench
339
+ ```
340
+
341
+ ```python
342
+ from diffusers import DiffusionPipeline
343
+ import torch
344
+
345
+ pipe = DiffusionPipeline.from_pretrained(
346
+ "CompVis/stable-diffusion-v1-4",
347
+ custom_pipeline="wildcard_stable_diffusion",
348
+
349
+ torch_dtype=torch.float16,
350
+ )
351
+ prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
352
+ out = pipe(
353
+ prompt,
354
+ wildcard_option_dict={
355
+ "clothing":["hat", "shirt", "scarf", "beret"]
356
+ },
357
+ wildcard_files=["object.txt", "animal.txt"],
358
+ num_prompt_samples=1
359
+ )
360
+ ```
361
+
362
+ ### Composable Stable Diffusion
363
+
364
+ [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) proposes conjunction and negation (negative prompts) operators for compositional generation with conditional diffusion models.
365
+
366
+ ```python
367
+ import torch as th
368
+ import numpy as np
369
+ import torchvision.utils as tvu
370
+
371
+ from diffusers import DiffusionPipeline
372
+
373
+ import argparse
374
+
375
+ parser = argparse.ArgumentParser()
376
+ parser.add_argument("--prompt", type=str, default="mystical trees | A magical pond | dark",
377
+ help="use '|' as the delimiter to compose separate sentences.")
378
+ parser.add_argument("--steps", type=int, default=50)
379
+ parser.add_argument("--scale", type=float, default=7.5)
380
+ parser.add_argument("--weights", type=str, default="7.5 | 7.5 | -7.5")
381
+ parser.add_argument("--seed", type=int, default=2)
382
+ parser.add_argument("--model_path", type=str, default="CompVis/stable-diffusion-v1-4")
383
+ parser.add_argument("--num_images", type=int, default=1)
384
+ args = parser.parse_args()
385
+
386
+ has_cuda = th.cuda.is_available()
387
+ device = th.device('cpu' if not has_cuda else 'cuda')
388
+
389
+ prompt = args.prompt
390
+ scale = args.scale
391
+ steps = args.steps
392
+
393
+ pipe = DiffusionPipeline.from_pretrained(
394
+ args.model_path,
395
+ custom_pipeline="composable_stable_diffusion",
396
+ ).to(device)
397
+
398
+ pipe.safety_checker = None
399
+
400
+ images = []
401
+ generator = th.Generator("cuda").manual_seed(args.seed)
402
+ for i in range(args.num_images):
403
+ image = pipe(prompt, guidance_scale=scale, num_inference_steps=steps,
404
+ weights=args.weights, generator=generator).images[0]
405
+ images.append(th.from_numpy(np.array(image)).permute(2, 0, 1) / 255.)
406
+ grid = tvu.make_grid(th.stack(images, dim=0), nrow=4, padding=0)
407
+ tvu.save_image(grid, f'{prompt}_{args.weights}' + '.png')
408
+
409
+ ```
410
+
411
+ ### Imagic Stable Diffusion
412
+ Allows you to edit an image using stable diffusion.
413
+
414
+ ```python
415
+ import requests
416
+ from PIL import Image
417
+ from io import BytesIO
418
+ import torch
419
+ import os
420
+ from diffusers import DiffusionPipeline, DDIMScheduler
421
+ has_cuda = torch.cuda.is_available()
422
+ device = torch.device('cpu' if not has_cuda else 'cuda')
423
+ pipe = DiffusionPipeline.from_pretrained(
424
+ "CompVis/stable-diffusion-v1-4",
425
+ safety_checker=None,
426
+ use_auth_token=True,
427
+ custom_pipeline="imagic_stable_diffusion",
428
+ scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
429
+ ).to(device)
430
+ generator = torch.Generator("cuda").manual_seed(0)
431
+ seed = 0
432
+ prompt = "A photo of Barack Obama smiling with a big grin"
433
+ url = 'https://www.dropbox.com/s/6tlwzr73jd1r9yk/obama.png?dl=1'
434
+ response = requests.get(url)
435
+ init_image = Image.open(BytesIO(response.content)).convert("RGB")
436
+ init_image = init_image.resize((512, 512))
437
+ res = pipe.train(
438
+ prompt,
439
+ image=init_image,
440
+ generator=generator)
441
+ res = pipe(alpha=1, guidance_scale=7.5, num_inference_steps=50)
442
+ os.makedirs("imagic", exist_ok=True)
443
+ image = res.images[0]
444
+ image.save('./imagic/imagic_image_alpha_1.png')
445
+ res = pipe(alpha=1.5, guidance_scale=7.5, num_inference_steps=50)
446
+ image = res.images[0]
447
+ image.save('./imagic/imagic_image_alpha_1_5.png')
448
+ res = pipe(alpha=2, guidance_scale=7.5, num_inference_steps=50)
449
+ image = res.images[0]
450
+ image.save('./imagic/imagic_image_alpha_2.png')
451
+ ```
452
+
453
+ ### Seed Resizing
454
+ Test seed resizing. First generate an image at 512 by 512, then generate an image with the same seed at 512 by 592 using seed resizing. Finally, generate a 512 by 592 image using the original Stable Diffusion pipeline.
455
+
456
+ ```python
457
+ import torch as th
458
+ import numpy as np
459
+ from diffusers import DiffusionPipeline
460
+
461
+ has_cuda = th.cuda.is_available()
462
+ device = th.device('cpu' if not has_cuda else 'cuda')
463
+
464
+ pipe = DiffusionPipeline.from_pretrained(
465
+ "CompVis/stable-diffusion-v1-4",
466
+ use_auth_token=True,
467
+ custom_pipeline="seed_resize_stable_diffusion"
468
+ ).to(device)
469
+
470
+ def dummy(images, **kwargs):
471
+ return images, False
472
+
473
+ pipe.safety_checker = dummy
474
+
475
+
476
+ images = []
477
+ th.manual_seed(0)
478
+ generator = th.Generator("cuda").manual_seed(0)
479
+
480
+ seed = 0
481
+ prompt = "A painting of a futuristic cop"
482
+
483
+ width = 512
484
+ height = 512
485
+
486
+ res = pipe(
487
+ prompt,
488
+ guidance_scale=7.5,
489
+ num_inference_steps=50,
490
+ height=height,
491
+ width=width,
492
+ generator=generator)
493
+ image = res.images[0]
494
+ image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
495
+
496
+
497
+ th.manual_seed(0)
498
+ generator = th.Generator("cuda").manual_seed(0)
499
+
500
+ pipe = DiffusionPipeline.from_pretrained(
501
+ "CompVis/stable-diffusion-v1-4",
502
+ use_auth_token=True,
503
+ custom_pipeline="seed_resize_stable_diffusion"
504
+ ).to(device)
505
+
506
+ width = 512
507
+ height = 592
508
+
509
+ res = pipe(
510
+ prompt,
511
+ guidance_scale=7.5,
512
+ num_inference_steps=50,
513
+ height=height,
514
+ width=width,
515
+ generator=generator)
516
+ image = res.images[0]
517
+ image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
518
+
519
+ pipe_compare = DiffusionPipeline.from_pretrained(
520
+ "CompVis/stable-diffusion-v1-4",
521
+ use_auth_token=True,
522
523
+ ).to(device)
524
+
525
+ res = pipe_compare(
526
+ prompt,
527
+ guidance_scale=7.5,
528
+ num_inference_steps=50,
529
+ height=height,
530
+ width=width,
531
+ generator=generator
532
+ )
533
+
534
+ image = res.images[0]
535
+ image.save('./seed_resize/seed_resize_{w}_{h}_image_compare.png'.format(w=width, h=height))
536
+ ```
537
+
538
+ ### Multilingual Stable Diffusion Pipeline
539
+
540
+ The following code can generate images from text in different languages using the pre-trained [mBART-50 many-to-one multilingual machine translation model](https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt) and Stable Diffusion.
541
+
542
+ ```python
543
+ from PIL import Image
544
+
545
+ import torch
546
+
547
+ from diffusers import DiffusionPipeline
548
+ from transformers import (
549
+ pipeline,
550
+ MBart50TokenizerFast,
551
+ MBartForConditionalGeneration,
552
+ )
553
+ device = "cuda" if torch.cuda.is_available() else "cpu"
554
+ device_dict = {"cuda": 0, "cpu": -1}
555
+
556
+ # helper function taken from: https://huggingface.co/blog/stable_diffusion
557
+ def image_grid(imgs, rows, cols):
558
+ assert len(imgs) == rows*cols
559
+
560
+ w, h = imgs[0].size
561
+ grid = Image.new('RGB', size=(cols*w, rows*h))
562
+ grid_w, grid_h = grid.size
563
+
564
+ for i, img in enumerate(imgs):
565
+ grid.paste(img, box=(i%cols*w, i//cols*h))
566
+ return grid
567
+
568
+ # Add language detection pipeline
569
+ language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection"
570
+ language_detection_pipeline = pipeline("text-classification",
571
+ model=language_detection_model_ckpt,
572
+ device=device_dict[device])
573
+
574
+ # Add model for language translation
575
+ trans_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
576
+ trans_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device)
577
+
578
+ diffuser_pipeline = DiffusionPipeline.from_pretrained(
579
+ "CompVis/stable-diffusion-v1-4",
580
+ custom_pipeline="multilingual_stable_diffusion",
581
+ detection_pipeline=language_detection_pipeline,
582
+ translation_model=trans_model,
583
+ translation_tokenizer=trans_tokenizer,
584
+
585
+ torch_dtype=torch.float16,
586
+ )
587
+
588
+ diffuser_pipeline.enable_attention_slicing()
589
+ diffuser_pipeline = diffuser_pipeline.to(device)
590
+
591
+ prompt = ["a photograph of an astronaut riding a horse",
592
+ "Una casa en la playa",
593
+ "Ein Hund, der Orange isst",
594
+ "Un restaurant parisien"]
595
+
596
+ output = diffuser_pipeline(prompt)
597
+
598
+ images = output.images
599
+
600
+ grid = image_grid(images, rows=2, cols=2)
601
+ ```
602
+
603
+ This example produces the following images:
604
+ ![image](https://user-images.githubusercontent.com/4313860/198328706-295824a4-9856-4ce5-8e66-278ceb42fd29.png)
605
+
606
+ ### Image to Image Inpainting Stable Diffusion
607
+
608
+ Similar to the standard stable diffusion inpainting example, except with the addition of an `inner_image` argument.
609
+
610
+ `image`, `inner_image`, and `mask` should have the same dimensions. `inner_image` should have an alpha (transparency) channel.
611
+
612
+ The aim is to overlay two images, then mask out the boundary between `image` and `inner_image` to allow stable diffusion to make the connection more seamless.
613
+ For example, this could be used to place a logo on a shirt and make it blend seamlessly.
614
+
615
+ ```python
616
+ import PIL
617
+ import torch
618
+
619
+ from diffusers import DiffusionPipeline
620
+
621
+ image_path = "./path-to-image.png"
622
+ inner_image_path = "./path-to-inner-image.png"
623
+ mask_path = "./path-to-mask.png"
624
+
625
+ init_image = PIL.Image.open(image_path).convert("RGB").resize((512, 512))
626
+ inner_image = PIL.Image.open(inner_image_path).convert("RGBA").resize((512, 512))
627
+ mask_image = PIL.Image.open(mask_path).convert("RGB").resize((512, 512))
628
+
629
+ pipe = DiffusionPipeline.from_pretrained(
630
+ "runwayml/stable-diffusion-inpainting",
631
+ custom_pipeline="img2img_inpainting",
632
+
633
+ torch_dtype=torch.float16
634
+ )
635
+ pipe = pipe.to("cuda")
636
+
637
+ prompt = "Your prompt here!"
638
+ image = pipe(prompt=prompt, image=init_image, inner_image=inner_image, mask_image=mask_image).images[0]
639
+ ```
640
+
641
+ ![2 by 2 grid demonstrating image to image inpainting.](https://user-images.githubusercontent.com/44398246/203506577-ec303be4-887e-4ebd-a773-c83fcb3dd01a.png)
642
+
643
+ ### Text Based Inpainting Stable Diffusion
644
+
645
+ Use a text prompt to generate the mask for the area to be inpainted.
646
+ Currently uses the CLIPSeg model for mask generation, then calls the standard Stable Diffusion Inpainting pipeline to perform the inpainting.
647
+
648
+ ```python
649
+ from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
650
+ from diffusers import DiffusionPipeline
651
+
652
+ from PIL import Image
653
+ import requests
654
+
655
+ processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
656
+ model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
657
+
658
+ pipe = DiffusionPipeline.from_pretrained(
659
+ "runwayml/stable-diffusion-inpainting",
660
+ custom_pipeline="text_inpainting",
661
+ segmentation_model=model,
662
+ segmentation_processor=processor
663
+ )
664
+ pipe = pipe.to("cuda")
665
+
666
+
667
+ url = "https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true"
668
+ image = Image.open(requests.get(url, stream=True).raw).resize((512, 512))
669
+ text = "a glass" # will mask out this text
670
+ prompt = "a cup" # the masked out region will be replaced with this
671
+
672
+ image = pipe(image=image, text=text, prompt=prompt).images[0]
673
+ ```
674
+
675
+ ### Bit Diffusion
676
+ Based on https://arxiv.org/abs/2208.04202, this is used for diffusion on discrete data, e.g. discrete image data or DNA sequence data. An unconditional discrete image can be generated like this:
677
+
678
+ ```python
679
+ from diffusers import DiffusionPipeline
680
+ pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="bit_diffusion")
681
+ image = pipe().images[0]
682
+
683
+ ```
684
+
685
+ ### Stable Diffusion with K Diffusion
686
+
687
+ Make sure you have @crowsonkb's https://github.com/crowsonkb/k-diffusion installed:
688
+
689
+ ```
690
+ pip install k-diffusion
691
+ ```
692
+
693
+ You can use the community pipeline as follows:
694
+
695
+ ```python
696
+ import torch
+
+ from diffusers import DiffusionPipeline
697
+
698
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
699
+ pipe = pipe.to("cuda")
700
+
701
+ prompt = "an astronaut riding a horse on mars"
702
+ pipe.set_scheduler("sample_heun")
703
+ seed = 33
+ generator = torch.Generator(device="cuda").manual_seed(seed)
704
+ image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
705
+
706
+ image.save("./astronaut_heun_k_diffusion.png")
707
+ ```
708
+
709
+ To make sure that K Diffusion and `diffusers` yield the same results:
710
+
711
+ **Diffusers**:
712
+ ```python
713
+ import torch
+
+ from diffusers import DiffusionPipeline, EulerDiscreteScheduler
714
+
715
+ seed = 33
716
+
717
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
718
+ pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
719
+ pipe = pipe.to("cuda")
720
+
721
+ prompt = "an astronaut riding a horse on mars"
+ generator = torch.Generator(device="cuda").manual_seed(seed)
722
+ image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
723
+ ```
724
+
725
+ ![diffusers_euler](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/k_diffusion/astronaut_euler.png)
726
+
727
+ **K Diffusion**:
728
+ ```python
729
+ import torch
+
+ from diffusers import DiffusionPipeline, EulerDiscreteScheduler
730
+
731
+ seed = 33
732
+
733
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
734
+ pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
735
+ pipe = pipe.to("cuda")
736
+
737
+ pipe.set_scheduler("sample_euler")
738
+ prompt = "an astronaut riding a horse on mars"
+ generator = torch.Generator(device="cuda").manual_seed(seed)
739
+ image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
740
+ ```
741
+
742
+ ![diffusers_euler](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/k_diffusion/astronaut_euler_k_diffusion.png)
743
+
744
+ ### Checkpoint Merger Pipeline
745
+ Based on the checkpoint merging in AUTOMATIC1111/webui. This is a custom pipeline that merges up to 3 pretrained model checkpoints, as long as they are in the Hugging Face model_index.json format.
746
+
747
+ The checkpoint merging is currently memory intensive, as it modifies the weights of a DiffusionPipeline object in place. Expect at least 13GB of RAM usage on Kaggle GPU kernels, and
748
+ on Colab you might run out of the 12GB of memory even while merging just two checkpoints.
749
+
750
+ Usage:
751
+ ```python
752
+ from diffusers import DiffusionPipeline
753
+
754
+ #Return a CheckpointMergerPipeline class that allows you to merge checkpoints.
755
+ #The checkpoint passed here is ignored. But still pass one of the checkpoints you plan to
756
+ #merge for convenience
757
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger")
758
+
759
+ #There are multiple possible scenarios:
760
+ #The pipeline with the merged checkpoints is returned in all the scenarios
761
+
762
+ # Compatible checkpoints, i.e. matched model_index.json files. Ignores the meta attributes (attrs with _ as prefix) in model_index.json during comparison.
763
+ merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4","CompVis/stable-diffusion-v1-2"], interp = "sigmoid", alpha = 0.4)
764
+
765
+ #Incompatible checkpoints in model_index.json but merge might be possible. Use force = True to ignore model_index.json compatibility
766
+ merged_pipe_1 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion"], force = True, interp = "sigmoid", alpha = 0.4)
767
+
768
+ # Three-checkpoint merging. Only the "add_difference" method actually uses all three checkpoints; any other option will ignore the 3rd checkpoint.
769
+ merged_pipe_2 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion","prompthero/openjourney"], force = True, interp = "add_difference", alpha = 0.4)
770
+
771
+ prompt = "An astronaut riding a horse on Mars"
772
+
773
+ image = merged_pipe(prompt).images[0]
774
+
775
+ ```
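+
+ For intuition, the interpolation modes blend the checkpoint weights roughly as follows (a minimal sketch on toy tensors, not the pipeline's exact implementation; the sigmoid/inverse-sigmoid modes additionally remap `alpha` before blending):
+
+ ```python
+ import torch
+
+ # toy "checkpoints": a single tensor each instead of full state dicts
+ theta_0 = torch.randn(4)  # e.g. stable-diffusion-v1-4
+ theta_1 = torch.randn(4)  # e.g. waifu-diffusion
+ theta_2 = torch.randn(4)  # third checkpoint, only used by add_difference
+ alpha = 0.4
+
+ # weighted-sum style blend of two checkpoints
+ merged = (1 - alpha) * theta_0 + alpha * theta_1
+
+ # add_difference: add the (theta_1 - theta_2) delta onto theta_0
+ merged_add_diff = theta_0 + alpha * (theta_1 - theta_2)
+ ```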
776
+ Some examples along with the merge details:
777
+
778
+ 1. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" ; Sigmoid interpolation; alpha = 0.8
779
+
780
+ ![Stable plus Waifu Sigmoid 0.8](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/stability_v1_4_waifu_sig_0.8.png)
781
+
782
+ 2. "hakurei/waifu-diffusion" + "prompthero/openjourney" ; Inverse Sigmoid interpolation; alpha = 0.8
783
+
784
+ ![Stable plus Waifu Sigmoid 0.8](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/waifu_openjourney_inv_sig_0.8.png)
785
+
786
+
787
+ 3. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" + "prompthero/openjourney"; Add Difference interpolation; alpha = 0.5
788
+
789
+ ![Stable plus Waifu plus openjourney add_diff 0.5](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/stable_waifu_openjourney_add_diff_0.5.png)
790
+
791
+
792
+ ### Stable Diffusion Comparisons
793
+
794
+ This Community Pipeline enables the comparison between the 4 checkpoints that exist for Stable Diffusion. They can be found through the following links:
795
+ 1. [Stable Diffusion v1.1](https://huggingface.co/CompVis/stable-diffusion-v1-1)
796
+ 2. [Stable Diffusion v1.2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
797
+ 3. [Stable Diffusion v1.3](https://huggingface.co/CompVis/stable-diffusion-v1-3)
798
+ 4. [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
799
+
800
+ ```python
801
+ from diffusers import DiffusionPipeline
802
+ import matplotlib.pyplot as plt
803
+
804
+ pipe = DiffusionPipeline.from_pretrained('CompVis/stable-diffusion-v1-4', custom_pipeline='suvadityamuk/StableDiffusionComparison')
805
+ pipe.enable_attention_slicing()
806
+ pipe = pipe.to('cuda')
807
+ prompt = "an astronaut riding a horse on mars"
808
+ output = pipe(prompt)
809
+
810
+ plt.subplot(2, 2, 1)
811
+ plt.imshow(output.images[0])
812
+ plt.title('Stable Diffusion v1.1')
813
+ plt.axis('off')
814
+ plt.subplot(2, 2, 2)
815
+ plt.imshow(output.images[1])
816
+ plt.title('Stable Diffusion v1.2')
817
+ plt.axis('off')
818
+ plt.subplot(2, 2, 3)
819
+ plt.imshow(output.images[2])
820
+ plt.title('Stable Diffusion v1.3')
821
+ plt.axis('off')
822
+ plt.subplot(2, 2, 4)
823
+ plt.imshow(output.images[3])
824
+ plt.title('Stable Diffusion v1.4')
825
+ plt.axis('off')
826
+
827
+ plt.show()
828
+ ```
829
+
830
+ As a result, you can look at a grid of all 4 generated images shown together, which captures the differences that the progression of training makes between the 4 checkpoints.
831
+
832
+ ### Magic Mix
833
+
834
+ Implementation of the [MagicMix: Semantic Mixing with Diffusion Models](https://arxiv.org/abs/2210.16056) paper. This is a Diffusion Pipeline for semantic mixing of an image and a text prompt to create a new concept while preserving the spatial layout and geometry of the subject in the image. The pipeline takes an image that provides the layout semantics and a prompt that provides the content semantics for the mixing process.
835
+
836
+ There are 3 parameters for the method:
837
+ - `mix_factor`: It is the interpolation constant used in the layout generation phase. The greater the value of `mix_factor`, the greater the influence of the prompt on the layout generation process.
838
+ - `kmax` and `kmin`: These determine the range for the layout and content generation process. A higher value of `kmax` results in more loss of information about the layout of the original image, and a higher value of `kmin` results in more steps for the content generation process.
839
+
840
+ Here is an example usage:
841
+
842
+ ```python
843
+ from diffusers import DiffusionPipeline, DDIMScheduler
844
+ from PIL import Image
845
+
846
+ pipe = DiffusionPipeline.from_pretrained(
847
+ "CompVis/stable-diffusion-v1-4",
848
+ custom_pipeline="magic_mix",
849
+ scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
850
+ ).to('cuda')
851
+
852
+ img = Image.open('phone.jpg')
853
+ mix_img = pipe(
854
+ img,
855
+ prompt = 'bed',
856
+ kmin = 0.3,
857
+ kmax = 0.5,
858
+ mix_factor = 0.5,
859
+ )
860
+ mix_img.save('phone_bed_mix.jpg')
861
+ ```
862
+ `mix_img` is a PIL image that can be saved locally or displayed directly in a Google Colab. The generated image is a mix of the layout semantics of the given image and the content semantics of the prompt.
863
+
864
+ E.g. the above script generates the following image:
865
+
866
+ `phone.jpg`
867
+
868
+ ![206903102-34e79b9f-9ed2-4fac-bb38-82871343c655](https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg)
869
+
870
+ `phone_bed_mix.jpg`
871
+
872
+ ![206903104-913a671d-ef53-4ae4-919d-64c3059c8f67](https://user-images.githubusercontent.com/59410571/209578602-70f323fa-05b7-4dd6-b055-e40683e37914.jpg)
873
+
874
+ For more example generations check out this [demo notebook](https://github.com/daspartho/MagicMix/blob/main/demo.ipynb).
875
+
876
+
877
+ ### Stable UnCLIP
878
+
879
+ UnCLIPPipeline("kakaobrain/karlo-v1-alpha") provides a prior model that can generate a CLIP image embedding from text.
880
+ StableDiffusionImageVariationPipeline("lambdalabs/sd-image-variations-diffusers") provides a decoder model that can generate images from a CLIP image embedding.
881
+
882
+ ```python
883
+ import torch
884
+ from diffusers import DiffusionPipeline
885
+
886
+ device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
887
+
888
+ pipeline = DiffusionPipeline.from_pretrained(
889
+ "kakaobrain/karlo-v1-alpha",
890
+ torch_dtype=torch.float16,
891
+ custom_pipeline="stable_unclip",
892
+ decoder_pipe_kwargs=dict(
893
+ image_encoder=None,
894
+ ),
895
+ )
896
+ pipeline.to(device)
897
+
898
+ prompt = "a shiba inu wearing a beret and black turtleneck"
899
+ random_generator = torch.Generator(device=device).manual_seed(1000)
900
+ output = pipeline(
901
+ prompt=prompt,
902
+ width=512,
903
+ height=512,
904
+ generator=random_generator,
905
+ prior_guidance_scale=4,
906
+ prior_num_inference_steps=25,
907
+ decoder_guidance_scale=8,
908
+ decoder_num_inference_steps=50,
909
+ )
910
+
911
+ image = output.images[0]
912
+ image.save("./shiba-inu.jpg")
913
+
914
+ # debug
915
+
916
+ # `pipeline.decoder_pipe` is a regular StableDiffusionImageVariationPipeline instance.
917
+ # It is used to convert clip image embedding to latents, then fed into VAE decoder.
918
+ print(pipeline.decoder_pipe.__class__)
919
+ # <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_image_variation.StableDiffusionImageVariationPipeline'>
920
+
921
+ # this pipeline only uses the prior module in "kakaobrain/karlo-v1-alpha"
922
+ # It is used to convert clip text embedding to clip image embedding.
923
+ print(pipeline)
924
+ # StableUnCLIPPipeline {
925
+ # "_class_name": "StableUnCLIPPipeline",
926
+ # "_diffusers_version": "0.12.0.dev0",
927
+ # "prior": [
928
+ # "diffusers",
929
+ # "PriorTransformer"
930
+ # ],
931
+ # "prior_scheduler": [
932
+ # "diffusers",
933
+ # "UnCLIPScheduler"
934
+ # ],
935
+ # "text_encoder": [
936
+ # "transformers",
937
+ # "CLIPTextModelWithProjection"
938
+ # ],
939
+ # "tokenizer": [
940
+ # "transformers",
941
+ # "CLIPTokenizer"
942
+ # ]
943
+ # }
944
+
945
+ # pipeline.prior_scheduler is the scheduler used for prior in UnCLIP.
946
+ print(pipeline.prior_scheduler)
947
+ # UnCLIPScheduler {
948
+ # "_class_name": "UnCLIPScheduler",
949
+ # "_diffusers_version": "0.12.0.dev0",
950
+ # "clip_sample": true,
951
+ # "clip_sample_range": 5.0,
952
+ # "num_train_timesteps": 1000,
953
+ # "prediction_type": "sample",
954
+ # "variance_type": "fixed_small_log"
955
+ # }
956
+ ```
957
+
958
+
959
+ `shiba-inu.jpg`
960
+
961
+
962
+ ![shiba-inu](https://user-images.githubusercontent.com/16448529/209185639-6e5ec794-ce9d-4883-aa29-bd6852a2abad.jpg)
963
+
964
+ ### UnCLIP Text Interpolation Pipeline
965
+
966
+ This Diffusion Pipeline takes two prompts and interpolates between them using spherical linear interpolation (slerp). The input prompts are converted to text embeddings by the pipeline's text_encoder, and the interpolation is done on the resulting text embeddings over the specified number of steps (default: 5).
967
+
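+ Spherical interpolation between two embedding vectors can be sketched as follows (an illustrative helper, not the pipeline's internal code):
+
+ ```python
+ import torch
+
+ def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor) -> torch.Tensor:
+     # interpolate along the arc between v0 and v1 instead of the straight line
+     v0_u = v0 / v0.norm()
+     v1_u = v1 / v1.norm()
+     omega = torch.acos((v0_u * v1_u).sum().clamp(-1.0, 1.0))
+     return (torch.sin((1.0 - t) * omega) * v0 + torch.sin(t * omega) * v1) / torch.sin(omega)
+ ```
+
+ The full pipeline usage: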
968
+ ```python
969
+ import torch
970
+ from diffusers import DiffusionPipeline
971
+
972
+ device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
973
+
974
+ pipe = DiffusionPipeline.from_pretrained(
975
+ "kakaobrain/karlo-v1-alpha",
976
+ torch_dtype=torch.float16,
977
+ custom_pipeline="unclip_text_interpolation"
978
+ )
979
+ pipe.to(device)
980
+
981
+ start_prompt = "A photograph of an adult lion"
982
+ end_prompt = "A photograph of a lion cub"
983
+ #For best results keep the prompts close in length to each other. Of course, feel free to try out with differing lengths.
984
+ generator = torch.Generator(device=device).manual_seed(42)
985
+
986
+ output = pipe(start_prompt, end_prompt, steps = 6, generator = generator, enable_sequential_cpu_offload=False)
987
+
988
+ for i,image in enumerate(output.images):
989
+ image.save('result%s.jpg' % i)
990
+ ```
991
+
992
+ The resulting images, in order:
993
+
994
+ ![result_0](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_0.png)
995
+ ![result_1](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_1.png)
996
+ ![result_2](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_2.png)
997
+ ![result_3](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_3.png)
998
+ ![result_4](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_4.png)
999
+ ![result_5](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_5.png)
1000
+
1001
+ ### UnCLIP Image Interpolation Pipeline
1002
+
1003
+ This Diffusion Pipeline takes two images or an image_embeddings tensor of size 2 and interpolates between their embeddings using spherical linear interpolation (slerp). The input images/image_embeddings are converted to image embeddings by the pipeline's image_encoder, and the interpolation is done on the resulting image embeddings over the specified number of steps (default: 5).
1004
+
1005
+ ```python
1006
+ import torch
1007
+ from diffusers import DiffusionPipeline
1008
+ from PIL import Image
1009
+
1010
+ device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
1011
+ dtype = torch.float16 if torch.cuda.is_available() else torch.bfloat16
1012
+
1013
+ pipe = DiffusionPipeline.from_pretrained(
1014
+ "kakaobrain/karlo-v1-alpha-image-variations",
1015
+ torch_dtype=dtype,
1016
+ custom_pipeline="unclip_image_interpolation"
1017
+ )
1018
+ pipe.to(device)
1019
+
1020
+ images = [Image.open('./starry_night.jpg'), Image.open('./flowers.jpg')]
1021
+ # The two input images are encoded and their embeddings are interpolated over `steps` outputs.
1022
+ generator = torch.Generator(device=device).manual_seed(42)
1023
+
1024
+ output = pipe(image=images, steps=6, generator=generator)
1025
+
1026
+ for i,image in enumerate(output.images):
1027
+ image.save('starry_to_flowers_%s.jpg' % i)
1028
+ ```
1029
+ The original images:-
1030
+
1031
+ ![starry](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_night.jpg)
1032
+ ![flowers](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/flowers.jpg)
1033
+
1034
+ The resulting images in order:-
1035
+
1036
+ ![result0](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_0.png)
1037
+ ![result1](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_1.png)
1038
+ ![result2](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_2.png)
1039
+ ![result3](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_3.png)
1040
+ ![result4](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_4.png)
1041
+ ![result5](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_5.png)
1042
+
1043
+ ### DDIM Noise Comparative Analysis Pipeline
1044
+ #### **Research question: What visual concepts do the diffusion models learn from each noise level during training?**
1045
+ The [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227) paper proposed an approach to answer this question as its second contribution.
1046
+ The approach consists of the following steps:
1047
+
1048
+ 1. The input is an image x0.
1049
+ 2. Perturb it to xt using a diffusion process q(xt|x0).
1050
+ - `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values approaching 1.0 allow many variations but will also produce images that are not semantically consistent with the input (see the sketch after this list for how `strength` maps to a starting timestep).
1051
+ 3. Reconstruct the image with the learned denoising process pθ(ˆx0|xt).
1052
+ 4. Compare x0 and ˆx0 among various t to show how each step contributes to the sample.
1053
+ The authors used the [openai/guided-diffusion](https://github.com/openai/guided-diffusion) model to denoise images from the FFHQ dataset. This pipeline extends their second contribution by investigating DDIM on any input image.
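+ The mapping from `strength` to the starting timestep is not spelled out above; a common convention in img2img-style diffusers pipelines looks roughly like the following sketch (hypothetical helper, not this pipeline's exact code):
+
+ ```python
+ def get_start_timestep(scheduler, num_inference_steps, strength):
+     # Assumes scheduler.set_timesteps(num_inference_steps) has already been called.
+     # strength=1.0 -> start from (almost) pure noise, strength~0.0 -> keep the input nearly intact.
+     init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
+     t_start = max(num_inference_steps - init_timestep, 0)
+     return scheduler.timesteps[t_start]
+ ```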
1054
+
1055
+ ```python
1056
+ import torch
1057
+ from PIL import Image
1058
+ import numpy as np
+ from diffusers import DiffusionPipeline
1059
+
1060
+ image_path = "path/to/your/image" # images from CelebA-HQ might be better
1061
+ image_pil = Image.open(image_path)
1062
+ image_name = image_path.split("/")[-1].split(".")[0]
1063
+
1064
+ device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
1065
+ pipe = DiffusionPipeline.from_pretrained(
1066
+ "google/ddpm-ema-celebahq-256",
1067
+ custom_pipeline="ddim_noise_comparative_analysis",
1068
+ )
1069
+ pipe = pipe.to(device)
1070
+
1071
+ for strength in np.linspace(0.1, 1, 25):
1072
+ denoised_image, latent_timestep = pipe(
1073
+ image_pil, strength=strength, return_dict=False
1074
+ )
1075
+ denoised_image = denoised_image[0]
1076
+ denoised_image.save(
1077
+ f"noise_comparative_analysis_{image_name}_{latent_timestep}.png"
1078
+ )
1079
+ ```
1080
+
1081
+ Here is the result of this pipeline (which uses DDIM) on the CelebA-HQ dataset.
1082
+
1083
+ ![noise-comparative-analysis](https://user-images.githubusercontent.com/67547213/224677066-4474b2ed-56ab-4c27-87c6-de3c0255eb9c.jpeg)
1084
+
1085
+ ### CLIP Guided Img2Img Stable Diffusion
1086
+
1087
+ CLIP-guided Img2Img Stable Diffusion can help generate more realistic images from an initial image
+ by guiding Stable Diffusion at every denoising step with an additional CLIP model.
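+
+ For intuition, the extra CLIP guidance is essentially a classifier-guidance-style update: at each denoising step the predicted clean image is embedded with CLIP and the latents are nudged along the gradient of a CLIP similarity loss. A minimal sketch of that idea (`clip_image_embed_fn` is a hypothetical stand-in for the pipeline's VAE decode + CLIP image encoder, not its exact code):
+
+ ```python
+ import torch
+
+ def clip_guidance_step(latents, clip_image_embed_fn, text_embeddings, clip_guidance_scale=100.0):
+     # Sketch only: push `latents` towards higher CLIP similarity with the prompt embedding.
+     latents = latents.detach().requires_grad_(True)
+     image_embeddings = clip_image_embed_fn(latents)  # decode latents and embed the image with CLIP
+     loss = -torch.nn.functional.cosine_similarity(image_embeddings, text_embeddings, dim=-1).mean()
+     grad = torch.autograd.grad(clip_guidance_scale * loss, latents)[0]
+     return (latents - grad).detach()
+ ```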
1089
+
1090
+ The following code requires roughly 12GB of GPU RAM.
1091
+
1092
+ ```python
1093
+ from io import BytesIO
1094
+ import requests
1095
+ import torch
1096
+ from diffusers import DiffusionPipeline
1097
+ from PIL import Image
1098
+ from transformers import CLIPFeatureExtractor, CLIPModel
1099
+ feature_extractor = CLIPFeatureExtractor.from_pretrained(
1100
+ "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
1101
+ )
1102
+ clip_model = CLIPModel.from_pretrained(
1103
+ "laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
1104
+ )
1105
+ guided_pipeline = DiffusionPipeline.from_pretrained(
1106
+ "CompVis/stable-diffusion-v1-4",
1107
+ # custom_pipeline="clip_guided_stable_diffusion",
1108
+ custom_pipeline="/home/njindal/diffusers/examples/community/clip_guided_stable_diffusion.py",
1109
+ clip_model=clip_model,
1110
+ feature_extractor=feature_extractor,
1111
+ torch_dtype=torch.float16,
1112
+ )
1113
+ guided_pipeline.enable_attention_slicing()
1114
+ guided_pipeline = guided_pipeline.to("cuda")
1115
+ prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
1116
+ url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
1117
+ response = requests.get(url)
1118
+ init_image = Image.open(BytesIO(response.content)).convert("RGB")
1119
+ image = guided_pipeline(
1120
+ prompt=prompt,
1121
+ num_inference_steps=30,
1122
+ image=init_image,
1123
+ strength=0.75,
1124
+ guidance_scale=7.5,
1125
+ clip_guidance_scale=100,
1126
+ num_cutouts=4,
1127
+ use_cutouts=False,
1128
+ ).images[0]
1129
+ display(image)
1130
+ ```
1131
+
1132
+ Init Image
1133
+
1134
+ ![img2img_init_clip_guidance](https://huggingface.co/datasets/njindal/images/resolve/main/clip_guided_img2img_init.jpg)
1135
+
1136
+ Output Image
1137
+
1138
+ ![img2img_clip_guidance](https://huggingface.co/datasets/njindal/images/resolve/main/clip_guided_img2img.jpg)
1139
+
1140
+ ### TensorRT Text2Image Stable Diffusion Pipeline
1141
+
1142
+ The TensorRT Pipeline can be used to accelerate the Text2Image Stable Diffusion Inference run.
1143
+
1144
+ NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
1145
+
1146
+ ```python
1147
+ import torch
1148
+ from diffusers import DDIMScheduler
1149
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipeline
1150
+
1151
+ # Use the DDIMScheduler scheduler here instead
1152
+ scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
1153
+ subfolder="scheduler")
1154
+
1155
+ pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
1156
+ custom_pipeline="stable_diffusion_tensorrt_txt2img",
1157
+ revision='fp16',
1158
+ torch_dtype=torch.float16,
1159
+ scheduler=scheduler,)
1160
+
1161
+ # re-use cached folder to save ONNX models and TensorRT Engines
1162
+ pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16',)
1163
+
1164
+ pipe = pipe.to("cuda")
1165
+
1166
+ prompt = "a beautiful photograph of Mt. Fuji during cherry blossom"
1167
+ image = pipe(prompt).images[0]
1168
+ image.save('tensorrt_mt_fuji.png')
1169
+ ```
1170
+
1171
+ ### EDICT Image Editing Pipeline
1172
+
1173
+ This pipeline implements the text-guided image editing approach from the paper [EDICT: Exact Diffusion Inversion via Coupled Transformations](https://arxiv.org/abs/2211.12446). You have to pass:
1174
+ - (`PIL`) `image` you want to edit.
1175
+ - `base_prompt`: the text prompt describing the current image (before editing).
1176
+ - `target_prompt`: the text prompt describing the image after the edits.
1177
+
1178
+ ```python
1179
+ from diffusers import DiffusionPipeline, DDIMScheduler
1180
+ from transformers import CLIPTextModel
1181
+ import torch, PIL, requests
1182
+ from io import BytesIO
1183
+ from IPython.display import display
1184
+
1185
+ def center_crop_and_resize(im):
1186
+
1187
+ width, height = im.size
1188
+ d = min(width, height)
1189
+ left = (width - d) / 2
1190
+ upper = (height - d) / 2
1191
+ right = (width + d) / 2
1192
+ lower = (height + d) / 2
1193
+
1194
+ return im.crop((left, upper, right, lower)).resize((512, 512))
1195
+
1196
+ torch_dtype = torch.float16
1197
+ device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
1198
+
1199
+ # scheduler and text_encoder param values as in the paper
1200
+ scheduler = DDIMScheduler(
1201
+ num_train_timesteps=1000,
1202
+ beta_start=0.00085,
1203
+ beta_end=0.012,
1204
+ beta_schedule="scaled_linear",
1205
+ set_alpha_to_one=False,
1206
+ clip_sample=False,
1207
+ )
1208
+
1209
+ text_encoder = CLIPTextModel.from_pretrained(
1210
+ pretrained_model_name_or_path="openai/clip-vit-large-patch14",
1211
+ torch_dtype=torch_dtype,
1212
+ )
1213
+
1214
+ # initialize pipeline
1215
+ pipeline = DiffusionPipeline.from_pretrained(
1216
+ pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4",
1217
+ custom_pipeline="edict_pipeline",
1218
+ revision="fp16",
1219
+ scheduler=scheduler,
1220
+ text_encoder=text_encoder,
1221
+ leapfrog_steps=True,
1222
+ torch_dtype=torch_dtype,
1223
+ ).to(device)
1224
+
1225
+ # download image
1226
+ image_url = "https://huggingface.co/datasets/Joqsan/images/resolve/main/imagenet_dog_1.jpeg"
1227
+ response = requests.get(image_url)
1228
+ image = PIL.Image.open(BytesIO(response.content))
1229
+
1230
+ # preprocess it
1231
+ cropped_image = center_crop_and_resize(image)
1232
+
1233
+ # define the prompts
1234
+ base_prompt = "A dog"
1235
+ target_prompt = "A golden retriever"
1236
+
1237
+ # run the pipeline
1238
+ result_image = pipeline(
1239
+ base_prompt=base_prompt,
1240
+ target_prompt=target_prompt,
1241
+ image=cropped_image,
1242
+ )
1243
+
1244
+ display(result_image)
1245
+ ```
1246
+
1247
+ Init Image
1248
+
1249
+ ![img2img_init_edict_text_editing](https://huggingface.co/datasets/Joqsan/images/resolve/main/imagenet_dog_1.jpeg)
1250
+
1251
+ Output Image
1252
+
1253
+ ![img2img_edict_text_editing](https://huggingface.co/datasets/Joqsan/images/resolve/main/imagenet_dog_1_cropped_generated.png)
1254
+
1255
+ ### Stable Diffusion RePaint
1256
+
1257
+ This pipeline uses the [RePaint](https://arxiv.org/abs/2201.09865) logic on the latent space of stable diffusion. It can
1258
+ be used similarly to other image inpainting pipelines but does not rely on a specific inpainting model. This means you can use
1259
+ models that are not specifically created for inpainting.
1260
+
1261
+ Make sure to use the `RePaintScheduler` as shown in the example below.
1262
+
1263
+ Disclaimer: The mask is transferred into latent space, which may lead to unexpected changes at the edges of the masked region.
+ Inference is also considerably slower.
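+
+ The core RePaint trick, applied here in latent space, is to re-inject the known (unmasked) region at every denoising step: the known part is obtained by forward-diffusing the original latents to the current timestep, while the masked part comes from the model. A rough per-step sketch (hypothetical helper and variable names; `scheduler.add_noise` stands for the usual forward-diffusion step, not this pipeline's exact code):
+
+ ```python
+ # mask: 1 where content should be generated, 0 where the original image is kept
+ def repaint_combine(scheduler, original_latents, denoised_latents, mask, t, noise):
+     # forward-diffuse the known region to the current timestep t
+     known_latents = scheduler.add_noise(original_latents, noise, t)
+     # keep the model's prediction only inside the masked (to-be-generated) area
+     return mask * denoised_latents + (1 - mask) * known_latents
+ ```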
1265
+
1266
+ ```py
1267
+ import PIL
1268
+ import requests
1269
+ import torch
1270
+ from io import BytesIO
1271
+ from diffusers import StableDiffusionPipeline, RePaintScheduler
1272
+ def download_image(url):
1273
+ response = requests.get(url)
1274
+ return PIL.Image.open(BytesIO(response.content)).convert("RGB")
1275
+ img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
1276
+ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
1277
+ init_image = download_image(img_url).resize((512, 512))
1278
+ mask_image = download_image(mask_url).resize((512, 512))
1279
+ mask_image = PIL.ImageOps.invert(mask_image)
1280
+ pipe = StableDiffusionPipeline.from_pretrained(
1281
+ "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, custom_pipeline="stable_diffusion_repaint",
1282
+ )
1283
+ pipe.scheduler = RePaintScheduler.from_config(pipe.scheduler.config)
1284
+ pipe = pipe.to("cuda")
1285
+ prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
1286
+ image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
1287
+ ```
1288
+
1289
+ ### TensorRT Image2Image Stable Diffusion Pipeline
1290
+
1291
+ The TensorRT Pipeline can be used to accelerate the Image2Image Stable Diffusion Inference run.
1292
+
1293
+ NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
1294
+
1295
+ ```python
1296
+ import requests
1297
+ from io import BytesIO
1298
+ from PIL import Image
1299
+ import torch
1300
+ from diffusers import DDIMScheduler
1301
+ from diffusers.pipelines.stable_diffusion import StableDiffusionImg2ImgPipeline
1302
+
1303
+ # Use the DDIMScheduler scheduler here instead
1304
+ scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
1305
+ subfolder="scheduler")
1306
+
1307
+
1308
+ pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
1309
+ custom_pipeline="stable_diffusion_tensorrt_img2img",
1310
+ revision='fp16',
1311
+ torch_dtype=torch.float16,
1312
+ scheduler=scheduler,)
1313
+
1314
+ # re-use cached folder to save ONNX models and TensorRT Engines
1315
+ pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16',)
1316
+
1317
+ pipe = pipe.to("cuda")
1318
+
1319
+ url = "https://pajoca.com/wp-content/uploads/2022/09/tekito-yamakawa-1.png"
1320
+ response = requests.get(url)
1321
+ input_image = Image.open(BytesIO(response.content)).convert("RGB")
1322
+
1323
+ prompt = "photorealistic new zealand hills"
1324
+ image = pipe(prompt, image=input_image, strength=0.75,).images[0]
1325
+ image.save('tensorrt_img2img_new_zealand_hills.png')
1326
+ ```
1327
+
1328
+ ### Stable Diffusion Reference
1329
+
1330
+ This pipeline uses Reference Control. Refer to the [sd-webui-controlnet discussion: Reference-only Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1236) and the [sd-webui-controlnet discussion: Reference-adain Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1280).
1331
+
1332
+ Based on [this issue](https://github.com/huggingface/diffusers/issues/3566),
1333
+ - `EulerAncestralDiscreteScheduler` got poor results.
1334
+
1335
+ ```py
1336
+ import torch
1337
+ from diffusers import DiffusionPipeline, UniPCMultistepScheduler
1338
+ from diffusers.utils import load_image
1339
+
1340
+ input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
1341
+
1342
+ pipe = DiffusionPipeline.from_pretrained(
+ "runwayml/stable-diffusion-v1-5",
+ custom_pipeline="stable_diffusion_reference",
1344
+ safety_checker=None,
1345
+ torch_dtype=torch.float16
1346
+ ).to('cuda:0')
1347
+
1348
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
1349
+
1350
+ result_img = pipe(ref_image=input_image,
1351
+ prompt="1girl",
1352
+ num_inference_steps=20,
1353
+ reference_attn=True,
1354
+ reference_adain=True).images[0]
1355
+ ```
1356
+
1357
+ Reference Image
1358
+
1359
+ ![reference_image](https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png)
1360
+
1361
+ Output Image of `reference_attn=True` and `reference_adain=False`
1362
+
1363
+ ![output_image](https://github.com/huggingface/diffusers/assets/24734142/813b5c6a-6d89-46ba-b7a4-2624e240eea5)
1364
+
1365
+ Output Image of `reference_attn=False` and `reference_adain=True`
1366
+
1367
+ ![output_image](https://github.com/huggingface/diffusers/assets/24734142/ffc90339-9ef0-4c4d-a544-135c3e5644da)
1368
+
1369
+ Output Image of `reference_attn=True` and `reference_adain=True`
1370
+
1371
+ ![output_image](https://github.com/huggingface/diffusers/assets/24734142/3c5255d6-867d-4d35-b202-8dfd30cc6827)
1372
+
1373
+ ### Stable Diffusion ControlNet Reference
1374
+
1375
+ This pipeline uses Reference Control with ControlNet. Refer to the [sd-webui-controlnet discussion: Reference-only Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1236) and the [sd-webui-controlnet discussion: Reference-adain Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1280).
1376
+
1377
+ Based on [this issue](https://github.com/huggingface/diffusers/issues/3566),
1378
+ - `EulerAncestralDiscreteScheduler` got poor results.
1379
+ - `guess_mode=True` works well for ControlNet v1.1
1380
+
1381
+ ```py
1382
+ import cv2
1383
+ import torch
1384
+ import numpy as np
1385
+ from PIL import Image
1386
+ from diffusers import ControlNetModel, DiffusionPipeline, UniPCMultistepScheduler
1387
+ from diffusers.utils import load_image
1388
+
1389
+ input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
1390
+
1391
+ # get canny image
1392
+ image = cv2.Canny(np.array(input_image), 100, 200)
1393
+ image = image[:, :, None]
1394
+ image = np.concatenate([image, image, image], axis=2)
1395
+ canny_image = Image.fromarray(image)
1396
+
1397
+ controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
1398
+ pipe = DiffusionPipeline.from_pretrained(
+ "runwayml/stable-diffusion-v1-5",
+ custom_pipeline="stable_diffusion_controlnet_reference",
1400
+ controlnet=controlnet,
1401
+ safety_checker=None,
1402
+ torch_dtype=torch.float16
1403
+ ).to('cuda:0')
1404
+
1405
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
1406
+
1407
+ result_img = pipe(ref_image=input_image,
1408
+ prompt="1girl",
1409
+ image=canny_image,
1410
+ num_inference_steps=20,
1411
+ reference_attn=True,
1412
+ reference_adain=True).images[0]
1413
+ ```
1414
+
1415
+ Reference Image
1416
+
1417
+ ![reference_image](https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png)
1418
+
1419
+ Output Image
1420
+
1421
+ ![output_image](https://github.com/huggingface/diffusers/assets/24734142/7b9a5830-f173-4b92-b0cf-73d0e9c01d60)
1422
+
1423
+
1424
+ ### Stable Diffusion on IPEX
1425
+
1426
+ This diffusion pipeline aims to accelerate the inference of Stable Diffusion on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
1427
+
1428
+ To use this pipeline, you need to:
1429
+ 1. Install [IPEX](https://github.com/intel/intel-extension-for-pytorch)
1430
+
1431
+ **Note:** Each PyTorch release has a corresponding IPEX release; the mapping is shown below. Installing PyTorch/IPEX 2.0 is recommended for the best performance.
1432
+
1433
+ |PyTorch Version|IPEX Version|
1434
+ |--|--|
1435
+ |[v2.0.\*](https://github.com/pytorch/pytorch/tree/v2.0.1 "v2.0.1")|[v2.0.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.100+cpu)|
1436
+ |[v1.13.\*](https://github.com/pytorch/pytorch/tree/v1.13.0 "v1.13.0")|[v1.13.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100+cpu)|
1437
+
1438
+ You can simply use pip to install the latest version of IPEX.
+ ```
1440
+ python -m pip install intel_extension_for_pytorch
1441
+ ```
1442
+ **Note:** To install a specific version, run with the following command:
1443
+ ```
1444
+ python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
1445
+ ```
1446
+
1447
+ 2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
1448
+
1449
+ **Note:** The generated image height/width passed to `prepare_for_ipex()` should be the same as the values used at pipeline inference time.
1450
+ ```python
1451
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_ipex")
1452
+ # For Float32
1453
+ pipe.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512) #value of image height/width should be consistent with the pipeline inference
1454
+ # For BFloat16
1455
+ pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512) #value of image height/width should be consistent with the pipeline inference
1456
+ ```
1457
+
1458
+ Then you can use the IPEX pipeline in the same way as the default Stable Diffusion pipeline.
1459
+ ```python
1460
+ # For Float32
1461
+ image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()'
1462
+ # For BFloat16
1463
+ with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
1464
+ image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()'
1465
+ ```
1466
+
1467
+ The following code compares the performance of the original stable diffusion pipeline with the ipex-optimized pipeline.
1468
+
1469
+ ```python
1470
+ import torch
1471
+ import intel_extension_for_pytorch as ipex
1472
+ from diffusers import DiffusionPipeline, StableDiffusionPipeline
1473
+ import time
1474
+
1475
+ prompt = "sailing ship in storm by Rembrandt"
1476
+ model_id = "runwayml/stable-diffusion-v1-5"
1477
+ # Helper function for time evaluation
1478
+ def elapsed_time(pipeline, nb_pass=3, num_inference_steps=20):
1479
+ # warmup
1480
+ for _ in range(2):
1481
+ images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images
1482
+ #time evaluation
1483
+ start = time.time()
1484
+ for _ in range(nb_pass):
1485
+ pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512)
1486
+ end = time.time()
1487
+ return (end - start) / nb_pass
1488
+
1489
+ ############## bf16 inference performance ###############
1490
+
1491
+ # 1. IPEX Pipeline initialization
1492
+ pipe = DiffusionPipeline.from_pretrained(model_id, custom_pipeline="stable_diffusion_ipex")
1493
+ pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512)
1494
+
1495
+ # 2. Original Pipeline initialization
1496
+ pipe2 = StableDiffusionPipeline.from_pretrained(model_id)
1497
+
1498
+ # 3. Compare performance between Original Pipeline and IPEX Pipeline
1499
+ with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
1500
+ latency = elapsed_time(pipe)
1501
+ print("Latency of StableDiffusionIPEXPipeline--bf16", latency)
1502
+ latency = elapsed_time(pipe2)
1503
+ print("Latency of StableDiffusionPipeline--bf16",latency)
1504
+
1505
+ ############## fp32 inference performance ###############
1506
+
1507
+ # 1. IPEX Pipeline initialization
1508
+ pipe3 = DiffusionPipeline.from_pretrained(model_id, custom_pipeline="stable_diffusion_ipex")
1509
+ pipe3.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512)
1510
+
1511
+ # 2. Original Pipeline initialization
1512
+ pipe4 = StableDiffusionPipeline.from_pretrained(model_id)
1513
+
1514
+ # 3. Compare performance between Original Pipeline and IPEX Pipeline
1515
+ latency = elapsed_time(pipe3)
1516
+ print("Latency of StableDiffusionIPEXPipeline--fp32", latency)
1517
+ latency = elapsed_time(pipe4)
1518
+ print("Latency of StableDiffusionPipeline--fp32",latency)
1519
+
1520
+ ```
1521
+
1522
+ ### CLIP Guided Images Mixing With Stable Diffusion
1523
+
1524
+ ![clip_guided_images_mixing_examples](https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/main.png)
1525
+
1526
+ The CLIP-guided Stable Diffusion images mixing pipeline combines two images using standard diffusion models.
+ It can use an (optional) CoCa model so you do not have to write an image description yourself.
1528
+ [More code examples](https://github.com/TheDenk/images_mixing)
1529
+
1530
+ ## Example Images Mixing (with CoCa)
1531
+ ```python
1532
+ import requests
1533
+ from io import BytesIO
1534
+
1535
+ import PIL
1536
+ import torch
1537
+ import open_clip
1538
+ from open_clip import SimpleTokenizer
1539
+ from diffusers import DiffusionPipeline
1540
+ from transformers import CLIPFeatureExtractor, CLIPModel
1541
+
1542
+
1543
+ def download_image(url):
1544
+ response = requests.get(url)
1545
+ return PIL.Image.open(BytesIO(response.content)).convert("RGB")
1546
+
1547
+ # Loading additional models
1548
+ feature_extractor = CLIPFeatureExtractor.from_pretrained(
1549
+ "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
1550
+ )
1551
+ clip_model = CLIPModel.from_pretrained(
1552
+ "laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
1553
+ )
1554
+ coca_model = open_clip.create_model('coca_ViT-L-14', pretrained='laion2B-s13B-b90k').to('cuda')
1555
+ coca_model.dtype = torch.float16
1556
+ coca_transform = open_clip.image_transform(
1557
+ coca_model.visual.image_size,
1558
+ is_train = False,
1559
+ mean = getattr(coca_model.visual, 'image_mean', None),
1560
+ std = getattr(coca_model.visual, 'image_std', None),
1561
+ )
1562
+ coca_tokenizer = SimpleTokenizer()
1563
+
1564
+ # Create the pipeline
1565
+ mixing_pipeline = DiffusionPipeline.from_pretrained(
1566
+ "CompVis/stable-diffusion-v1-4",
1567
+ custom_pipeline="clip_guided_images_mixing_stable_diffusion",
1568
+ clip_model=clip_model,
1569
+ feature_extractor=feature_extractor,
1570
+ coca_model=coca_model,
1571
+ coca_tokenizer=coca_tokenizer,
1572
+ coca_transform=coca_transform,
1573
+ torch_dtype=torch.float16,
1574
+ )
1575
+ mixing_pipeline.enable_attention_slicing()
1576
+ mixing_pipeline = mixing_pipeline.to("cuda")
1577
+
1578
+ # Run the pipeline
1579
+ generator = torch.Generator(device="cuda").manual_seed(17)
1580
+
1581
1584
+
1585
+ content_image = download_image("https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/boromir.jpg")
1586
+ style_image = download_image("https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/gigachad.jpg")
1587
+
1588
+ pipe_images = mixing_pipeline(
1589
+ num_inference_steps=50,
1590
+ content_image=content_image,
1591
+ style_image=style_image,
1592
+ noise_strength=0.65,
1593
+ slerp_latent_style_strength=0.9,
1594
+ slerp_prompt_style_strength=0.1,
1595
+ slerp_clip_image_style_strength=0.1,
1596
+ guidance_scale=9.0,
1597
+ batch_size=1,
1598
+ clip_guidance_scale=100,
1599
+ generator=generator,
1600
+ ).images
1601
+ ```
1602
+
1603
+ ![image_mixing_result](https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/boromir_gigachad.png)
1604
+
1605
+ ### Stable Diffusion Mixture Tiling
1606
+
1607
+ This pipeline uses the Mixture of Diffusers approach. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.
1608
+
1609
+ ```python
1610
+ from diffusers import LMSDiscreteScheduler, DiffusionPipeline
1611
+
1612
+ # Create scheduler and model (similar to StableDiffusionPipeline)
1613
+ scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
1614
+ pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_tiling")
1615
+ pipeline.to("cuda")
1616
+
1617
+ # Mixture of Diffusers generation
1618
+ image = pipeline(
1619
+ prompt=[[
1620
+ "A charming house in the countryside, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
1621
+ "A dirt road in the countryside crossing pastures, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
1622
+ "An old and rusty giant robot lying on a dirt road, by jakub rozalski, dark sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece"
1623
+ ]],
1624
+ tile_height=640,
1625
+ tile_width=640,
1626
+ tile_row_overlap=0,
1627
+ tile_col_overlap=256,
1628
+ guidance_scale=8,
1629
+ seed=7178915308,
1630
+ num_inference_steps=50,
1631
+ )["images"][0]
1632
+ ```
1633
+ ![mixture_tiling_results](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/mixture_tiling.png)
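+
+ Each prompt in the grid above is denoised on its own tile, and the overlapping tile predictions are blended with smooth (Gaussian-like) weights so that no seams are visible. A toy sketch of that blending idea (hypothetical shapes and weights, not the pipeline's exact code):
+
+ ```python
+ import torch
+
+ def blend_tiles(tile_latents, tile_slices, full_shape):
+     # tile_latents: list of (B, C, h, w) tensors; tile_slices: matching (row_slice, col_slice) pairs
+     out = torch.zeros(full_shape)
+     weight_sum = torch.zeros(full_shape)
+     for latent, (rows, cols) in zip(tile_latents, tile_slices):
+         h, w = latent.shape[-2:]
+         wy = torch.exp(-((torch.arange(h) - h / 2) ** 2) / (2 * (h / 4) ** 2))
+         wx = torch.exp(-((torch.arange(w) - w / 2) ** 2) / (2 * (w / 4) ** 2))
+         weight = (wy[:, None] * wx[None, :]).expand_as(latent)  # per-pixel Gaussian weight
+         out[..., rows, cols] += latent * weight
+         weight_sum[..., rows, cols] += weight
+     return out / weight_sum.clamp(min=1e-8)
+ ```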
1634
+
1635
+ ### TensorRT Inpainting Stable Diffusion Pipeline
1636
+
1637
+ The TensorRT Pipeline can be used to accelerate the Inpainting Stable Diffusion Inference run.
1638
+
1639
+ NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
1640
+
1641
+ ```python
1642
+ import requests
1643
+ from io import BytesIO
1644
+ from PIL import Image
1645
+ import torch
1646
+ from diffusers import PNDMScheduler
1647
+ from diffusers.pipelines.stable_diffusion import StableDiffusionImg2ImgPipeline
1648
+
1649
+ # Use the PNDMScheduler scheduler here instead
1650
+ scheduler = PNDMScheduler.from_pretrained("stabilityai/stable-diffusion-2-inpainting", subfolder="scheduler")
1651
+
1652
+
1653
+ pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting",
1654
+ custom_pipeline="stable_diffusion_tensorrt_inpaint",
1655
+ revision='fp16',
1656
+ torch_dtype=torch.float16,
1657
+ scheduler=scheduler,
1658
+ )
1659
+
1660
+ # re-use cached folder to save ONNX models and TensorRT Engines
1661
+ pipe.set_cached_folder("stabilityai/stable-diffusion-2-inpainting", revision='fp16',)
1662
+
1663
+ pipe = pipe.to("cuda")
1664
+
1665
+ url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
1666
+ response = requests.get(url)
1667
+ input_image = Image.open(BytesIO(response.content)).convert("RGB")
1668
+
1669
+ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
1670
+ response = requests.get(mask_url)
1671
+ mask_image = Image.open(BytesIO(response.content)).convert("RGB")
1672
+
1673
+ prompt = "a mecha robot sitting on a bench"
1674
+ image = pipe(prompt, image=input_image, mask_image=mask_image, strength=0.75,).images[0]
1675
+ image.save('tensorrt_inpaint_mecha_robot.png')
1676
+ ```
1677
+
1678
+ ### Stable Diffusion Mixture Canvas
1679
+
1680
+ This pipeline uses the Mixture of Diffusers approach. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.
1681
+
1682
+ ```python
1683
+ from PIL import Image
1684
+ from diffusers import LMSDiscreteScheduler, DiffusionPipeline
1685
+ from diffusers.pipelines.pipeline_utils import Image2ImageRegion, Text2ImageRegion, preprocess_image
1686
+
1687
+
1688
+ # Load and preprocess guide image
1689
+ iic_image = preprocess_image(Image.open("input_image.png").convert("RGB"))
1690
+
1691
+ # Create scheduler and model (similar to StableDiffusionPipeline)
1692
+ scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
1693
+ pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_canvas")
1694
+ pipeline.to("cuda")
1695
+
1696
+ # Mixture of Diffusers generation
1697
+ output = pipeline(
1698
+ canvas_height=800,
1699
+ canvas_width=352,
1700
+ regions=[
1701
+ Text2ImageRegion(0, 800, 0, 352, guidance_scale=8,
1702
+ prompt=f"best quality, masterpiece, WLOP, sakimichan, art contest winner on pixiv, 8K, intricate details, wet effects, rain drops, ethereal, mysterious, futuristic, UHD, HDR, cinematic lighting, in a beautiful forest, rainy day, award winning, trending on artstation, beautiful confident cheerful young woman, wearing a futuristic sleeveless dress, ultra beautiful detailed eyes, hyper-detailed face, complex, perfect, model,  textured, chiaroscuro, professional make-up, realistic, figure in frame, "),
1703
+ Image2ImageRegion(800-352, 800, 0, 352, reference_image=iic_image, strength=1.0),
1704
+ ],
1705
+ num_inference_steps=100,
1706
+ seed=5525475061,
1707
+ )["images"][0]
1708
+ ```
1709
+ ![Input_Image](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/input_image.png)
1710
+ ![mixture_canvas_results](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/canvas.png)
1711
+
1712
+
1713
+ ### IADB pipeline
1714
+
1715
+ This pipeline is the implementation of the [α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) paper.
1716
+ It is a simple and minimalist diffusion model.
1717
+
1718
+ The following code shows how to use the IADB pipeline to generate images using a pretrained celebahq-256 model.
1719
+
1720
+ ```python
1721
+ import matplotlib.pyplot as plt
+
+ from diffusers import DiffusionPipeline
+
+ pipeline_iadb = DiffusionPipeline.from_pretrained("thomasc4/iadb-celebahq-256", custom_pipeline='iadb')
1723
+
1724
+ pipeline_iadb = pipeline_iadb.to('cuda')
1725
+
1726
+ output = pipeline_iadb(batch_size=4,num_inference_steps=128)
1727
+ for i in range(len(output[0])):
1728
+ plt.imshow(output[0][i])
1729
+ plt.show()
1730
+
1731
+ ```
1732
+
1733
+ Sampling with the IADB formulation is easy, and can be done in a few lines (the pipeline already implements it):
1734
+
1735
+ ```python
1736
+
1737
+ def sample_iadb(model, x0, nb_step):
1738
+ x_alpha = x0
1739
+ for t in range(nb_step):
1740
+ alpha = (t/nb_step)
1741
+ alpha_next =((t+1)/nb_step)
1742
+
1743
+ d = model(x_alpha, torch.tensor(alpha, device=x_alpha.device))['sample']
1744
+ x_alpha = x_alpha + (alpha_next-alpha)*d
1745
+
1746
+ return x_alpha
1747
+
1748
+ ```
1749
+
1750
+ The training loop is also straightforward:
1751
+
1752
+ ```python
1753
+
1754
+ # Training loop
1755
+ while True:
1756
+ x0 = sample_noise()
1757
+ x1 = sample_dataset()
1758
+
1759
+ alpha = torch.rand(batch_size)
1760
+
1761
+ # Blend
1762
+ x_alpha = (1-alpha) * x0 + alpha * x1
1763
+
1764
+ # Loss
1765
+ loss = torch.sum((D(x_alpha, alpha)- (x1-x0))**2)
1766
+ optimizer.zero_grad()
1767
+ loss.backward()
1768
+ optimizer.step()
1769
+ ```
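+
+ Note that `sample_noise()`, `sample_dataset()`, `D` and `optimizer` above are placeholders, and for image batches `alpha` has to broadcast against `(B, C, H, W)` tensors, e.g.:
+
+ ```python
+ alpha = torch.rand(batch_size, device=x0.device).view(-1, 1, 1, 1)  # broadcast over C, H, W
+ x_alpha = (1 - alpha) * x0 + alpha * x1
+ ```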
v0.19.2/bit_diffusion.py ADDED
@@ -0,0 +1,264 @@
1
+ from typing import Optional, Tuple, Union
2
+
3
+ import torch
4
+ from einops import rearrange, reduce
5
+
6
+ from diffusers import DDIMScheduler, DDPMScheduler, DiffusionPipeline, ImagePipelineOutput, UNet2DConditionModel
7
+ from diffusers.schedulers.scheduling_ddim import DDIMSchedulerOutput
8
+ from diffusers.schedulers.scheduling_ddpm import DDPMSchedulerOutput
9
+
10
+
11
+ BITS = 8
12
+
13
+
14
+ # convert to bit representations and back taken from https://github.com/lucidrains/bit-diffusion/blob/main/bit_diffusion/bit_diffusion.py
15
+ def decimal_to_bits(x, bits=BITS):
16
+ """expects image tensor ranging from 0 to 1, outputs bit tensor ranging from -1 to 1"""
17
+ device = x.device
18
+
19
+ x = (x * 255).int().clamp(0, 255)
20
+
21
+ mask = 2 ** torch.arange(bits - 1, -1, -1, device=device)
22
+ mask = rearrange(mask, "d -> d 1 1")
23
+ x = rearrange(x, "b c h w -> b c 1 h w")
24
+
25
+ bits = ((x & mask) != 0).float()
26
+ bits = rearrange(bits, "b c d h w -> b (c d) h w")
27
+ bits = bits * 2 - 1
28
+ return bits
29
+
30
+
31
+ def bits_to_decimal(x, bits=BITS):
32
+ """expects bits from -1 to 1, outputs image tensor from 0 to 1"""
33
+ device = x.device
34
+
35
+ x = (x > 0).int()
36
+ mask = 2 ** torch.arange(bits - 1, -1, -1, device=device, dtype=torch.int32)
37
+
38
+ mask = rearrange(mask, "d -> d 1 1")
39
+ x = rearrange(x, "b (c d) h w -> b c d h w", d=8)
40
+ dec = reduce(x * mask, "b c d h w -> b c h w", "sum")
41
+ return (dec / 255).clamp(0.0, 1.0)
42
+
43
+
44
+ # modified scheduler step functions for clamping the predicted x_0 between -bit_scale and +bit_scale
45
+ def ddim_bit_scheduler_step(
46
+ self,
47
+ model_output: torch.FloatTensor,
48
+ timestep: int,
49
+ sample: torch.FloatTensor,
50
+ eta: float = 0.0,
51
+ use_clipped_model_output: bool = True,
52
+ generator=None,
53
+ return_dict: bool = True,
54
+ ) -> Union[DDIMSchedulerOutput, Tuple]:
55
+ """
56
+ Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
57
+ process from the learned model outputs (most often the predicted noise).
58
+ Args:
59
+ model_output (`torch.FloatTensor`): direct output from learned diffusion model.
60
+ timestep (`int`): current discrete timestep in the diffusion chain.
61
+ sample (`torch.FloatTensor`):
62
+ current instance of sample being created by diffusion process.
63
+ eta (`float`): weight of noise for added noise in diffusion step.
64
+ use_clipped_model_output (`bool`): TODO
65
+ generator: random number generator.
66
+ return_dict (`bool`): option for returning tuple rather than DDIMSchedulerOutput class
67
+ Returns:
68
+ [`~schedulers.scheduling_utils.DDIMSchedulerOutput`] or `tuple`:
69
+ [`~schedulers.scheduling_utils.DDIMSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
70
+ returning a tuple, the first element is the sample tensor.
71
+ """
72
+ if self.num_inference_steps is None:
73
+ raise ValueError(
74
+ "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
75
+ )
76
+
77
+ # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf
78
+ # Ideally, read DDIM paper in-detail understanding
79
+
80
+ # Notation (<variable name> -> <name in paper>
81
+ # - pred_noise_t -> e_theta(x_t, t)
82
+ # - pred_original_sample -> f_theta(x_t, t) or x_0
83
+ # - std_dev_t -> sigma_t
84
+ # - eta -> η
85
+ # - pred_sample_direction -> "direction pointing to x_t"
86
+ # - pred_prev_sample -> "x_t-1"
87
+
88
+ # 1. get previous step value (=t-1)
89
+ prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
90
+
91
+ # 2. compute alphas, betas
92
+ alpha_prod_t = self.alphas_cumprod[timestep]
93
+ alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
94
+
95
+ beta_prod_t = 1 - alpha_prod_t
96
+
97
+ # 3. compute predicted original sample from predicted noise also called
98
+ # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
99
+ pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
100
+
101
+ # 4. Clip "predicted x_0"
102
+ scale = self.bit_scale
103
+ if self.config.clip_sample:
104
+ pred_original_sample = torch.clamp(pred_original_sample, -scale, scale)
105
+
106
+ # 5. compute variance: "sigma_t(η)" -> see formula (16)
107
+ # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1)
108
+ variance = self._get_variance(timestep, prev_timestep)
109
+ std_dev_t = eta * variance ** (0.5)
110
+
111
+ if use_clipped_model_output:
112
+ # the model_output is always re-derived from the clipped x_0 in Glide
113
+ model_output = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5)
114
+
115
+ # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
116
+ pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * model_output
117
+
118
+ # 7. compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
119
+ prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction
120
+
121
+ if eta > 0:
122
+ # randn_like does not support generator https://github.com/pytorch/pytorch/issues/27072
123
+ device = model_output.device if torch.is_tensor(model_output) else "cpu"
124
+ noise = torch.randn(model_output.shape, dtype=model_output.dtype, generator=generator).to(device)
125
+ variance = self._get_variance(timestep, prev_timestep) ** (0.5) * eta * noise
126
+
127
+ prev_sample = prev_sample + variance
128
+
129
+ if not return_dict:
130
+ return (prev_sample,)
131
+
132
+ return DDIMSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)
133
+
134
+
135
+ def ddpm_bit_scheduler_step(
136
+ self,
137
+ model_output: torch.FloatTensor,
138
+ timestep: int,
139
+ sample: torch.FloatTensor,
140
+ prediction_type="epsilon",
141
+ generator=None,
142
+ return_dict: bool = True,
143
+ ) -> Union[DDPMSchedulerOutput, Tuple]:
144
+ """
145
+ Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
146
+ process from the learned model outputs (most often the predicted noise).
147
+ Args:
148
+ model_output (`torch.FloatTensor`): direct output from learned diffusion model.
149
+ timestep (`int`): current discrete timestep in the diffusion chain.
150
+ sample (`torch.FloatTensor`):
151
+ current instance of sample being created by diffusion process.
152
+ prediction_type (`str`, default `epsilon`):
153
+ indicates whether the model predicts the noise (epsilon), or the samples (`sample`).
154
+ generator: random number generator.
155
+ return_dict (`bool`): option for returning tuple rather than DDPMSchedulerOutput class
156
+ Returns:
157
+ [`~schedulers.scheduling_utils.DDPMSchedulerOutput`] or `tuple`:
158
+ [`~schedulers.scheduling_utils.DDPMSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
159
+ returning a tuple, the first element is the sample tensor.
160
+ """
161
+ t = timestep
162
+
163
+ if model_output.shape[1] == sample.shape[1] * 2 and self.variance_type in ["learned", "learned_range"]:
164
+ model_output, predicted_variance = torch.split(model_output, sample.shape[1], dim=1)
165
+ else:
166
+ predicted_variance = None
167
+
168
+ # 1. compute alphas, betas
169
+ alpha_prod_t = self.alphas_cumprod[t]
170
+ alpha_prod_t_prev = self.alphas_cumprod[t - 1] if t > 0 else self.one
171
+ beta_prod_t = 1 - alpha_prod_t
172
+ beta_prod_t_prev = 1 - alpha_prod_t_prev
173
+
174
+ # 2. compute predicted original sample from predicted noise also called
175
+ # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
176
+ if prediction_type == "epsilon":
177
+ pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
178
+ elif prediction_type == "sample":
179
+ pred_original_sample = model_output
180
+ else:
181
+ raise ValueError(f"Unsupported prediction_type {prediction_type}.")
182
+
183
+ # 3. Clip "predicted x_0"
184
+ scale = self.bit_scale
185
+ if self.config.clip_sample:
186
+ pred_original_sample = torch.clamp(pred_original_sample, -scale, scale)
187
+
188
+ # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
189
+ # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
190
+ pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * self.betas[t]) / beta_prod_t
191
+ current_sample_coeff = self.alphas[t] ** (0.5) * beta_prod_t_prev / beta_prod_t
192
+
193
+ # 5. Compute predicted previous sample µ_t
194
+ # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
195
+ pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample
196
+
197
+ # 6. Add noise
198
+ variance = 0
199
+ if t > 0:
200
+ noise = torch.randn(
201
+ model_output.size(), dtype=model_output.dtype, layout=model_output.layout, generator=generator
202
+ ).to(model_output.device)
203
+ variance = (self._get_variance(t, predicted_variance=predicted_variance) ** 0.5) * noise
204
+
205
+ pred_prev_sample = pred_prev_sample + variance
206
+
207
+ if not return_dict:
208
+ return (pred_prev_sample,)
209
+
210
+ return DDPMSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample)
211
+
212
+
213
+ class BitDiffusion(DiffusionPipeline):
214
+ def __init__(
215
+ self,
216
+ unet: UNet2DConditionModel,
217
+ scheduler: Union[DDIMScheduler, DDPMScheduler],
218
+ bit_scale: Optional[float] = 1.0,
219
+ ):
220
+ super().__init__()
221
+ self.bit_scale = bit_scale
222
+ self.scheduler.step = (
223
+ ddim_bit_scheduler_step if isinstance(scheduler, DDIMScheduler) else ddpm_bit_scheduler_step
224
+ )
225
+
226
+ self.register_modules(unet=unet, scheduler=scheduler)
227
+
228
+ @torch.no_grad()
229
+ def __call__(
230
+ self,
231
+ height: Optional[int] = 256,
232
+ width: Optional[int] = 256,
233
+ num_inference_steps: Optional[int] = 50,
234
+ generator: Optional[torch.Generator] = None,
235
+ batch_size: Optional[int] = 1,
236
+ output_type: Optional[str] = "pil",
237
+ return_dict: bool = True,
238
+ **kwargs,
239
+ ) -> Union[Tuple, ImagePipelineOutput]:
240
+ latents = torch.randn(
241
+ (batch_size, self.unet.config.in_channels, height, width),
242
+ generator=generator,
243
+ )
244
+ latents = decimal_to_bits(latents) * self.bit_scale
245
+ latents = latents.to(self.device)
246
+
247
+ self.scheduler.set_timesteps(num_inference_steps)
248
+
249
+ for t in self.progress_bar(self.scheduler.timesteps):
250
+ # predict the noise residual
251
+ noise_pred = self.unet(latents, t).sample
252
+
253
+ # compute the previous noisy sample x_t -> x_t-1
254
+ latents = self.scheduler.step(noise_pred, t, latents).prev_sample
255
+
256
+ image = bits_to_decimal(latents)
257
+
258
+ if output_type == "pil":
259
+ image = self.numpy_to_pil(image)
260
+
261
+ if not return_dict:
262
+ return (image,)
263
+
264
+ return ImagePipelineOutput(images=image)
v0.19.2/checkpoint_merger.py ADDED
@@ -0,0 +1,286 @@
1
+ import glob
2
+ import os
3
+ from typing import Dict, List, Union
4
+
5
+ import torch
6
+
7
+ from diffusers.utils import is_safetensors_available
8
+
9
+
10
+ if is_safetensors_available():
11
+ import safetensors.torch
12
+
13
+ from huggingface_hub import snapshot_download
14
+
15
+ from diffusers import DiffusionPipeline, __version__
16
+ from diffusers.schedulers.scheduling_utils import SCHEDULER_CONFIG_NAME
17
+ from diffusers.utils import CONFIG_NAME, DIFFUSERS_CACHE, ONNX_WEIGHTS_NAME, WEIGHTS_NAME
18
+
19
+
20
+ class CheckpointMergerPipeline(DiffusionPipeline):
21
+ """
22
+ A class that supports merging diffusion models based on the discussion here:
23
+ https://github.com/huggingface/diffusers/issues/877
24
+
25
+ Example usage:-
26
+
27
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger.py")
28
+
29
+ merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4","prompthero/openjourney"], interp = 'inv_sigmoid', alpha = 0.8, force = True)
30
+
31
+ merged_pipe.to('cuda')
32
+
33
+ prompt = "An astronaut riding a unicycle on Mars"
34
+
35
+ results = merged_pipe(prompt)
36
+
37
+ ## For more details, see the docstring for the merge method.
38
+
39
+ """
40
+
41
+ def __init__(self):
42
+ self.register_to_config()
43
+ super().__init__()
44
+
45
+ def _compare_model_configs(self, dict0, dict1):
46
+ if dict0 == dict1:
47
+ return True
48
+ else:
49
+ config0, meta_keys0 = self._remove_meta_keys(dict0)
50
+ config1, meta_keys1 = self._remove_meta_keys(dict1)
51
+ if config0 == config1:
52
+ print(f"Warning !: Mismatch in keys {meta_keys0} and {meta_keys1}.")
53
+ return True
54
+ return False
55
+
56
+ def _remove_meta_keys(self, config_dict: Dict):
57
+ meta_keys = []
58
+ temp_dict = config_dict.copy()
59
+ for key in config_dict.keys():
60
+ if key.startswith("_"):
61
+ temp_dict.pop(key)
62
+ meta_keys.append(key)
63
+ return (temp_dict, meta_keys)
64
+
65
+ @torch.no_grad()
66
+ def merge(self, pretrained_model_name_or_path_list: List[Union[str, os.PathLike]], **kwargs):
67
+ """
68
+ Returns a new pipeline object of the class 'DiffusionPipeline' with the merged checkpoints(weights) of the models passed
69
+ in the argument 'pretrained_model_name_or_path_list' as a list.
70
+
71
+ Parameters:
72
+ -----------
73
+ pretrained_model_name_or_path_list : A list of valid pretrained model names in the HuggingFace hub or paths to locally stored models in the HuggingFace format.
74
+
75
+ **kwargs:
76
+ Supports all the default DiffusionPipeline.get_config_dict kwargs viz..
77
+
78
+ cache_dir, resume_download, force_download, proxies, local_files_only, use_auth_token, revision, torch_dtype, device_map.
79
+
80
+ alpha - The interpolation parameter. Ranges from 0 to 1. It affects the ratio in which the checkpoints are merged. A 0.8 alpha
81
+ would mean that the first model checkpoints would affect the final result far less than an alpha of 0.2
82
+
83
+ interp - The interpolation method to use for the merging. Supports "sigmoid", "inv_sigmoid", "add_diff" and None.
84
+ Passing None uses the default interpolation which is weighted sum interpolation. For merging three checkpoints, only "add_diff" is supported.
85
+
86
+ force - Whether to ignore mismatch in model_config.json for the current models. Defaults to False.
87
+
88
+ """
89
+ # Default kwargs from DiffusionPipeline
90
+ cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
91
+ resume_download = kwargs.pop("resume_download", False)
92
+ force_download = kwargs.pop("force_download", False)
93
+ proxies = kwargs.pop("proxies", None)
94
+ local_files_only = kwargs.pop("local_files_only", False)
95
+ use_auth_token = kwargs.pop("use_auth_token", None)
96
+ revision = kwargs.pop("revision", None)
97
+ torch_dtype = kwargs.pop("torch_dtype", None)
98
+ device_map = kwargs.pop("device_map", None)
99
+
100
+ alpha = kwargs.pop("alpha", 0.5)
101
+ interp = kwargs.pop("interp", None)
102
+
103
+ print("Received list", pretrained_model_name_or_path_list)
104
+ print(f"Combining with alpha={alpha}, interpolation mode={interp}")
105
+
106
+ checkpoint_count = len(pretrained_model_name_or_path_list)
107
+ # Ignore result from model_index_json comparison of the two checkpoints
108
+ force = kwargs.pop("force", False)
109
+
110
+ # If less than 2 checkpoints, nothing to merge. If more than 3, not supported for now.
111
+ if checkpoint_count > 3 or checkpoint_count < 2:
112
+ raise ValueError(
113
+ "Received incorrect number of checkpoints to merge. Ensure that either 2 or 3 checkpoints are being"
114
+ " passed."
115
+ )
116
+
117
+ print("Received the right number of checkpoints")
118
+ # chkpt0, chkpt1 = pretrained_model_name_or_path_list[0:2]
119
+ # chkpt2 = pretrained_model_name_or_path_list[2] if checkpoint_count == 3 else None
120
+
121
+ # Validate that the checkpoints can be merged
122
+ # Step 1: Load the model config and compare the checkpoints. We'll compare the model_index.json first while ignoring the keys starting with '_'
123
+ config_dicts = []
124
+ for pretrained_model_name_or_path in pretrained_model_name_or_path_list:
125
+ config_dict = DiffusionPipeline.load_config(
126
+ pretrained_model_name_or_path,
127
+ cache_dir=cache_dir,
128
+ resume_download=resume_download,
129
+ force_download=force_download,
130
+ proxies=proxies,
131
+ local_files_only=local_files_only,
132
+ use_auth_token=use_auth_token,
133
+ revision=revision,
134
+ )
135
+ config_dicts.append(config_dict)
136
+
137
+ comparison_result = True
138
+ for idx in range(1, len(config_dicts)):
139
+ comparison_result &= self._compare_model_configs(config_dicts[idx - 1], config_dicts[idx])
140
+ if not force and comparison_result is False:
141
+ raise ValueError("Incompatible checkpoints. Please check model_index.json for the models.")
142
+ print(config_dicts[0], config_dicts[1])
143
+ print("Compatible model_index.json files found")
144
+ # Step 2: Basic Validation has succeeded. Let's download the models and save them into our local files.
145
+ cached_folders = []
146
+ for pretrained_model_name_or_path, config_dict in zip(pretrained_model_name_or_path_list, config_dicts):
147
+ folder_names = [k for k in config_dict.keys() if not k.startswith("_")]
148
+ allow_patterns = [os.path.join(k, "*") for k in folder_names]
149
+ allow_patterns += [
150
+ WEIGHTS_NAME,
151
+ SCHEDULER_CONFIG_NAME,
152
+ CONFIG_NAME,
153
+ ONNX_WEIGHTS_NAME,
154
+ DiffusionPipeline.config_name,
155
+ ]
156
+ requested_pipeline_class = config_dict.get("_class_name")
157
+ user_agent = {"diffusers": __version__, "pipeline_class": requested_pipeline_class}
158
+
159
+ cached_folder = (
160
+ pretrained_model_name_or_path
161
+ if os.path.isdir(pretrained_model_name_or_path)
162
+ else snapshot_download(
163
+ pretrained_model_name_or_path,
164
+ cache_dir=cache_dir,
165
+ resume_download=resume_download,
166
+ proxies=proxies,
167
+ local_files_only=local_files_only,
168
+ use_auth_token=use_auth_token,
169
+ revision=revision,
170
+ allow_patterns=allow_patterns,
171
+ user_agent=user_agent,
172
+ )
173
+ )
174
+ print("Cached Folder", cached_folder)
175
+ cached_folders.append(cached_folder)
176
+
177
+ # Step 3:-
178
+ # Load the first checkpoint as a diffusion pipeline and modify its module state_dict in place
179
+ final_pipe = DiffusionPipeline.from_pretrained(
180
+ cached_folders[0], torch_dtype=torch_dtype, device_map=device_map
181
+ )
182
+ final_pipe.to(self.device)
183
+
184
+ checkpoint_path_2 = None
185
+ if len(cached_folders) > 2:
186
+ checkpoint_path_2 = os.path.join(cached_folders[2])
187
+
188
+ if interp == "sigmoid":
189
+ theta_func = CheckpointMergerPipeline.sigmoid
190
+ elif interp == "inv_sigmoid":
191
+ theta_func = CheckpointMergerPipeline.inv_sigmoid
192
+ elif interp == "add_diff":
193
+ theta_func = CheckpointMergerPipeline.add_difference
194
+ else:
195
+ theta_func = CheckpointMergerPipeline.weighted_sum
196
+
197
+ # Find each module's state dict.
198
+ for attr in final_pipe.config.keys():
199
+ if not attr.startswith("_"):
200
+ checkpoint_path_1 = os.path.join(cached_folders[1], attr)
201
+ if os.path.exists(checkpoint_path_1):
202
+ files = [
203
+ *glob.glob(os.path.join(checkpoint_path_1, "*.safetensors")),
204
+ *glob.glob(os.path.join(checkpoint_path_1, "*.bin")),
205
+ ]
206
+ checkpoint_path_1 = files[0] if len(files) > 0 else None
207
+ if len(cached_folders) < 3:
208
+ checkpoint_path_2 = None
209
+ else:
210
+ checkpoint_path_2 = os.path.join(cached_folders[2], attr)
211
+ if os.path.exists(checkpoint_path_2):
212
+ files = [
213
+ *glob.glob(os.path.join(checkpoint_path_2, "*.safetensors")),
214
+ *glob.glob(os.path.join(checkpoint_path_2, "*.bin")),
215
+ ]
216
+ checkpoint_path_2 = files[0] if len(files) > 0 else None
217
+ # For an attr if both checkpoint_path_1 and 2 are None, ignore.
218
+ # If atleast one is present, deal with it according to interp method, of course only if the state_dict keys match.
219
+ if checkpoint_path_1 is None and checkpoint_path_2 is None:
220
+ print(f"Skipping {attr}: not present in 2nd or 3d model")
221
+ continue
222
+ try:
223
+ module = getattr(final_pipe, attr)
224
+ if isinstance(module, bool): # ignore requires_safety_checker boolean
225
+ continue
226
+ theta_0 = getattr(module, "state_dict")
227
+ theta_0 = theta_0()
228
+
229
+ update_theta_0 = getattr(module, "load_state_dict")
230
+ theta_1 = (
231
+ safetensors.torch.load_file(checkpoint_path_1)
232
+ if (is_safetensors_available() and checkpoint_path_1.endswith(".safetensors"))
233
+ else torch.load(checkpoint_path_1, map_location="cpu")
234
+ )
235
+ theta_2 = None
236
+ if checkpoint_path_2:
237
+ theta_2 = (
238
+ safetensors.torch.load_file(checkpoint_path_2)
239
+ if (is_safetensors_available() and checkpoint_path_2.endswith(".safetensors"))
240
+ else torch.load(checkpoint_path_2, map_location="cpu")
241
+ )
242
+
243
+ if not theta_0.keys() == theta_1.keys():
244
+ print(f"Skipping {attr}: key mismatch")
245
+ continue
246
+ if theta_2 and not theta_1.keys() == theta_2.keys():
247
+ print(f"Skipping {attr}:y mismatch")
248
+ except Exception as e:
249
+ print(f"Skipping {attr} do to an unexpected error: {str(e)}")
250
+ continue
251
+ print(f"MERGING {attr}")
252
+
253
+ for key in theta_0.keys():
254
+ if theta_2:
255
+ theta_0[key] = theta_func(theta_0[key], theta_1[key], theta_2[key], alpha)
256
+ else:
257
+ theta_0[key] = theta_func(theta_0[key], theta_1[key], None, alpha)
258
+
259
+ del theta_1
260
+ del theta_2
261
+ update_theta_0(theta_0)
262
+
263
+ del theta_0
264
+ return final_pipe
265
+
266
+ @staticmethod
267
+ def weighted_sum(theta0, theta1, theta2, alpha):
268
+ return ((1 - alpha) * theta0) + (alpha * theta1)
269
+
270
+ # Smoothstep (https://en.wikipedia.org/wiki/Smoothstep)
271
+ @staticmethod
272
+ def sigmoid(theta0, theta1, theta2, alpha):
273
+ alpha = alpha * alpha * (3 - (2 * alpha))
274
+ return theta0 + ((theta1 - theta0) * alpha)
275
+
276
+ # Inverse Smoothstep (https://en.wikipedia.org/wiki/Smoothstep)
277
+ @staticmethod
278
+ def inv_sigmoid(theta0, theta1, theta2, alpha):
279
+ import math
280
+
281
+ alpha = 0.5 - math.sin(math.asin(1.0 - 2.0 * alpha) / 3.0)
282
+ return theta0 + ((theta1 - theta0) * alpha)
283
+
284
+ @staticmethod
285
+ def add_difference(theta0, theta1, theta2, alpha):
286
+ return theta0 + (theta1 - theta2) * (1.0 - alpha)
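
A minimal usage sketch for the checkpoint merger above, assuming the file is loadable as the `checkpoint_merger` community pipeline and that the listed Hub model ids are reachable; the `interp` and `alpha` values are illustrative placeholders, not defaults.

```python
# Sketch only: model ids, interp and alpha are illustrative assumptions.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="checkpoint_merger",
)

# merge() returns a new DiffusionPipeline whose module state_dicts are the
# interpolation of the inputs, e.g. weighted sum: (1 - alpha) * theta0 + alpha * theta1.
merged = pipe.merge(
    ["CompVis/stable-diffusion-v1-4", "runwayml/stable-diffusion-v1-5"],
    interp="sigmoid",  # or "inv_sigmoid", "add_diff", or None for the plain weighted sum
    alpha=0.4,
)

image = merged("a fantasy landscape, trending on artstation").images[0]
image.save("merged.png")
```

Note that `add_diff` computes `theta0 + (theta1 - theta2) * (1 - alpha)`, so it needs a third model id in the list.
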
v0.19.2/clip_guided_images_mixing_stable_diffusion.py ADDED
@@ -0,0 +1,456 @@
1
+ # -*- coding: utf-8 -*-
2
+ import inspect
3
+ from typing import Optional, Union
4
+
5
+ import numpy as np
6
+ import PIL
7
+ import torch
8
+ from torch.nn import functional as F
9
+ from torchvision import transforms
10
+ from transformers import CLIPFeatureExtractor, CLIPModel, CLIPTextModel, CLIPTokenizer
11
+
12
+ from diffusers import (
13
+ AutoencoderKL,
14
+ DDIMScheduler,
15
+ DiffusionPipeline,
16
+ DPMSolverMultistepScheduler,
17
+ LMSDiscreteScheduler,
18
+ PNDMScheduler,
19
+ UNet2DConditionModel,
20
+ )
21
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
22
+ from diffusers.utils import (
23
+ PIL_INTERPOLATION,
24
+ randn_tensor,
25
+ )
26
+
27
+
28
+ def preprocess(image, w, h):
29
+ if isinstance(image, torch.Tensor):
30
+ return image
31
+ elif isinstance(image, PIL.Image.Image):
32
+ image = [image]
33
+
34
+ if isinstance(image[0], PIL.Image.Image):
35
+ image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image]
36
+ image = np.concatenate(image, axis=0)
37
+ image = np.array(image).astype(np.float32) / 255.0
38
+ image = image.transpose(0, 3, 1, 2)
39
+ image = 2.0 * image - 1.0
40
+ image = torch.from_numpy(image)
41
+ elif isinstance(image[0], torch.Tensor):
42
+ image = torch.cat(image, dim=0)
43
+ return image
44
+
45
+
46
+ def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
47
+ if not isinstance(v0, np.ndarray):
48
+ inputs_are_torch = True
49
+ input_device = v0.device
50
+ v0 = v0.cpu().numpy()
51
+ v1 = v1.cpu().numpy()
52
+
53
+ dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
54
+ if np.abs(dot) > DOT_THRESHOLD:
55
+ v2 = (1 - t) * v0 + t * v1
56
+ else:
57
+ theta_0 = np.arccos(dot)
58
+ sin_theta_0 = np.sin(theta_0)
59
+ theta_t = theta_0 * t
60
+ sin_theta_t = np.sin(theta_t)
61
+ s0 = np.sin(theta_0 - theta_t) / sin_theta_0
62
+ s1 = sin_theta_t / sin_theta_0
63
+ v2 = s0 * v0 + s1 * v1
64
+
65
+ if inputs_are_torch:
66
+ v2 = torch.from_numpy(v2).to(input_device)
67
+
68
+ return v2
69
+
70
+
71
+ def spherical_dist_loss(x, y):
72
+ x = F.normalize(x, dim=-1)
73
+ y = F.normalize(y, dim=-1)
74
+ return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
75
+
76
+
77
+ def set_requires_grad(model, value):
78
+ for param in model.parameters():
79
+ param.requires_grad = value
80
+
81
+
82
+ class CLIPGuidedImagesMixingStableDiffusion(DiffusionPipeline):
83
+ def __init__(
84
+ self,
85
+ vae: AutoencoderKL,
86
+ text_encoder: CLIPTextModel,
87
+ clip_model: CLIPModel,
88
+ tokenizer: CLIPTokenizer,
89
+ unet: UNet2DConditionModel,
90
+ scheduler: Union[PNDMScheduler, LMSDiscreteScheduler, DDIMScheduler, DPMSolverMultistepScheduler],
91
+ feature_extractor: CLIPFeatureExtractor,
92
+ coca_model=None,
93
+ coca_tokenizer=None,
94
+ coca_transform=None,
95
+ ):
96
+ super().__init__()
97
+ self.register_modules(
98
+ vae=vae,
99
+ text_encoder=text_encoder,
100
+ clip_model=clip_model,
101
+ tokenizer=tokenizer,
102
+ unet=unet,
103
+ scheduler=scheduler,
104
+ feature_extractor=feature_extractor,
105
+ coca_model=coca_model,
106
+ coca_tokenizer=coca_tokenizer,
107
+ coca_transform=coca_transform,
108
+ )
109
+ self.feature_extractor_size = (
110
+ feature_extractor.size
111
+ if isinstance(feature_extractor.size, int)
112
+ else feature_extractor.size["shortest_edge"]
113
+ )
114
+ self.normalize = transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std)
115
+ set_requires_grad(self.text_encoder, False)
116
+ set_requires_grad(self.clip_model, False)
117
+
118
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
119
+ if slice_size == "auto":
120
+ # half the attention head size is usually a good trade-off between
121
+ # speed and memory
122
+ slice_size = self.unet.config.attention_head_dim // 2
123
+ self.unet.set_attention_slice(slice_size)
124
+
125
+ def disable_attention_slicing(self):
126
+ self.enable_attention_slicing(None)
127
+
128
+ def freeze_vae(self):
129
+ set_requires_grad(self.vae, False)
130
+
131
+ def unfreeze_vae(self):
132
+ set_requires_grad(self.vae, True)
133
+
134
+ def freeze_unet(self):
135
+ set_requires_grad(self.unet, False)
136
+
137
+ def unfreeze_unet(self):
138
+ set_requires_grad(self.unet, True)
139
+
140
+ def get_timesteps(self, num_inference_steps, strength, device):
141
+ # get the original timestep using init_timestep
142
+ init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
143
+
144
+ t_start = max(num_inference_steps - init_timestep, 0)
145
+ timesteps = self.scheduler.timesteps[t_start:]
146
+
147
+ return timesteps, num_inference_steps - t_start
148
+
149
+ def prepare_latents(self, image, timestep, batch_size, dtype, device, generator=None):
150
+ if not isinstance(image, torch.Tensor):
151
+ raise ValueError(f"`image` has to be of type `torch.Tensor` but is {type(image)}")
152
+
153
+ image = image.to(device=device, dtype=dtype)
154
+
155
+ if isinstance(generator, list):
156
+ init_latents = [
157
+ self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
158
+ ]
159
+ init_latents = torch.cat(init_latents, dim=0)
160
+ else:
161
+ init_latents = self.vae.encode(image).latent_dist.sample(generator)
162
+
163
+ # Hardcode 0.18215 because stable-diffusion-2-base has not self.vae.config.scaling_factor
164
+ init_latents = 0.18215 * init_latents
165
+ init_latents = init_latents.repeat_interleave(batch_size, dim=0)
166
+
167
+ noise = randn_tensor(init_latents.shape, generator=generator, device=device, dtype=dtype)
168
+
169
+ # get latents
170
+ init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
171
+ latents = init_latents
172
+
173
+ return latents
174
+
175
+ def get_image_description(self, image):
176
+ transformed_image = self.coca_transform(image).unsqueeze(0)
177
+ with torch.no_grad(), torch.cuda.amp.autocast():
178
+ generated = self.coca_model.generate(transformed_image.to(device=self.device, dtype=self.coca_model.dtype))
179
+ generated = self.coca_tokenizer.decode(generated[0].cpu().numpy())
180
+ return generated.split("<end_of_text>")[0].replace("<start_of_text>", "").rstrip(" .,")
181
+
182
+ def get_clip_image_embeddings(self, image, batch_size):
183
+ clip_image_input = self.feature_extractor.preprocess(image)
184
+ clip_image_features = torch.from_numpy(clip_image_input["pixel_values"][0]).unsqueeze(0).to(self.device).half()
185
+ image_embeddings_clip = self.clip_model.get_image_features(clip_image_features)
186
+ image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
187
+ image_embeddings_clip = image_embeddings_clip.repeat_interleave(batch_size, dim=0)
188
+ return image_embeddings_clip
189
+
190
+ @torch.enable_grad()
191
+ def cond_fn(
192
+ self,
193
+ latents,
194
+ timestep,
195
+ index,
196
+ text_embeddings,
197
+ noise_pred_original,
198
+ original_image_embeddings_clip,
199
+ clip_guidance_scale,
200
+ ):
201
+ latents = latents.detach().requires_grad_()
202
+
203
+ latent_model_input = self.scheduler.scale_model_input(latents, timestep)
204
+
205
+ # predict the noise residual
206
+ noise_pred = self.unet(latent_model_input, timestep, encoder_hidden_states=text_embeddings).sample
207
+
208
+ if isinstance(self.scheduler, (PNDMScheduler, DDIMScheduler, DPMSolverMultistepScheduler)):
209
+ alpha_prod_t = self.scheduler.alphas_cumprod[timestep]
210
+ beta_prod_t = 1 - alpha_prod_t
211
+ # compute predicted original sample from predicted noise also called
212
+ # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
213
+ pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5)
214
+
215
+ fac = torch.sqrt(beta_prod_t)
216
+ sample = pred_original_sample * (fac) + latents * (1 - fac)
217
+ elif isinstance(self.scheduler, LMSDiscreteScheduler):
218
+ sigma = self.scheduler.sigmas[index]
219
+ sample = latents - sigma * noise_pred
220
+ else:
221
+ raise ValueError(f"scheduler type {type(self.scheduler)} not supported")
222
+
223
+ # Hardcode 0.18215 because stable-diffusion-2-base has not self.vae.config.scaling_factor
224
+ sample = 1 / 0.18215 * sample
225
+ image = self.vae.decode(sample).sample
226
+ image = (image / 2 + 0.5).clamp(0, 1)
227
+
228
+ image = transforms.Resize(self.feature_extractor_size)(image)
229
+ image = self.normalize(image).to(latents.dtype)
230
+
231
+ image_embeddings_clip = self.clip_model.get_image_features(image)
232
+ image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
233
+
234
+ loss = spherical_dist_loss(image_embeddings_clip, original_image_embeddings_clip).mean() * clip_guidance_scale
235
+
236
+ grads = -torch.autograd.grad(loss, latents)[0]
237
+
238
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
239
+ latents = latents.detach() + grads * (sigma**2)
240
+ noise_pred = noise_pred_original
241
+ else:
242
+ noise_pred = noise_pred_original - torch.sqrt(beta_prod_t) * grads
243
+ return noise_pred, latents
244
+
245
+ @torch.no_grad()
246
+ def __call__(
247
+ self,
248
+ style_image: Union[torch.FloatTensor, PIL.Image.Image],
249
+ content_image: Union[torch.FloatTensor, PIL.Image.Image],
250
+ style_prompt: Optional[str] = None,
251
+ content_prompt: Optional[str] = None,
252
+ height: Optional[int] = 512,
253
+ width: Optional[int] = 512,
254
+ noise_strength: float = 0.6,
255
+ num_inference_steps: Optional[int] = 50,
256
+ guidance_scale: Optional[float] = 7.5,
257
+ batch_size: Optional[int] = 1,
258
+ eta: float = 0.0,
259
+ clip_guidance_scale: Optional[float] = 100,
260
+ generator: Optional[torch.Generator] = None,
261
+ output_type: Optional[str] = "pil",
262
+ return_dict: bool = True,
263
+ slerp_latent_style_strength: float = 0.8,
264
+ slerp_prompt_style_strength: float = 0.1,
265
+ slerp_clip_image_style_strength: float = 0.1,
266
+ ):
267
+ if isinstance(generator, list) and len(generator) != batch_size:
268
+ raise ValueError(f"You have passed {batch_size} batch_size, but only {len(generator)} generators.")
269
+
270
+ if height % 8 != 0 or width % 8 != 0:
271
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
272
+
273
+ if isinstance(generator, torch.Generator) and batch_size > 1:
274
+ generator = [generator] + [None] * (batch_size - 1)
275
+
276
+ coca_is_none = [
277
+ ("model", self.coca_model is None),
278
+ ("tokenizer", self.coca_tokenizer is None),
279
+ ("transform", self.coca_transform is None),
280
+ ]
281
+ coca_is_none = [x[0] for x in coca_is_none if x[1]]
282
+ coca_is_none_str = ", ".join(coca_is_none)
283
+ # generate prompts with coca model if prompt is None
284
+ if content_prompt is None:
285
+ if len(coca_is_none):
286
+ raise ValueError(
287
+ f"Content prompt is None and CoCa [{coca_is_none_str}] is None."
288
+ f"Set prompt or pass Coca [{coca_is_none_str}] to DiffusionPipeline."
289
+ )
290
+ content_prompt = self.get_image_description(content_image)
291
+ if style_prompt is None:
292
+ if len(coca_is_none):
293
+ raise ValueError(
294
+ f"Style prompt is None and CoCa [{coca_is_none_str}] is None."
295
+ f" Set prompt or pass Coca [{coca_is_none_str}] to DiffusionPipeline."
296
+ )
297
+ style_prompt = self.get_image_description(style_image)
298
+
299
+ # get prompt text embeddings for content and style
300
+ content_text_input = self.tokenizer(
301
+ content_prompt,
302
+ padding="max_length",
303
+ max_length=self.tokenizer.model_max_length,
304
+ truncation=True,
305
+ return_tensors="pt",
306
+ )
307
+ content_text_embeddings = self.text_encoder(content_text_input.input_ids.to(self.device))[0]
308
+
309
+ style_text_input = self.tokenizer(
310
+ style_prompt,
311
+ padding="max_length",
312
+ max_length=self.tokenizer.model_max_length,
313
+ truncation=True,
314
+ return_tensors="pt",
315
+ )
316
+ style_text_embeddings = self.text_encoder(style_text_input.input_ids.to(self.device))[0]
317
+
318
+ text_embeddings = slerp(slerp_prompt_style_strength, content_text_embeddings, style_text_embeddings)
319
+
320
+ # duplicate text embeddings for each generation per prompt
321
+ text_embeddings = text_embeddings.repeat_interleave(batch_size, dim=0)
322
+
323
+ # set timesteps
324
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
325
+ extra_set_kwargs = {}
326
+ if accepts_offset:
327
+ extra_set_kwargs["offset"] = 1
328
+
329
+ self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
330
+ # Some schedulers like PNDM have timesteps as arrays
331
+ # It's more optimized to move all timesteps to correct device beforehand
332
+ self.scheduler.timesteps.to(self.device)
333
+
334
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, noise_strength, self.device)
335
+ latent_timestep = timesteps[:1].repeat(batch_size)
336
+
337
+ # Preprocess image
338
+ preprocessed_content_image = preprocess(content_image, width, height)
339
+ content_latents = self.prepare_latents(
340
+ preprocessed_content_image, latent_timestep, batch_size, text_embeddings.dtype, self.device, generator
341
+ )
342
+
343
+ preprocessed_style_image = preprocess(style_image, width, height)
344
+ style_latents = self.prepare_latents(
345
+ preprocessed_style_image, latent_timestep, batch_size, text_embeddings.dtype, self.device, generator
346
+ )
347
+
348
+ latents = slerp(slerp_latent_style_strength, content_latents, style_latents)
349
+
350
+ if clip_guidance_scale > 0:
351
+ content_clip_image_embedding = self.get_clip_image_embeddings(content_image, batch_size)
352
+ style_clip_image_embedding = self.get_clip_image_embeddings(style_image, batch_size)
353
+ clip_image_embeddings = slerp(
354
+ slerp_clip_image_style_strength, content_clip_image_embedding, style_clip_image_embedding
355
+ )
356
+
357
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
358
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
359
+ # corresponds to doing no classifier free guidance.
360
+ do_classifier_free_guidance = guidance_scale > 1.0
361
+ # get unconditional embeddings for classifier free guidance
362
+ if do_classifier_free_guidance:
363
+ max_length = content_text_input.input_ids.shape[-1]
364
+ uncond_input = self.tokenizer([""], padding="max_length", max_length=max_length, return_tensors="pt")
365
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
366
+ # duplicate unconditional embeddings for each generation per prompt
367
+ uncond_embeddings = uncond_embeddings.repeat_interleave(batch_size, dim=0)
368
+
369
+ # For classifier free guidance, we need to do two forward passes.
370
+ # Here we concatenate the unconditional and text embeddings into a single batch
371
+ # to avoid doing two forward passes
372
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
373
+
374
+ # get the initial random noise unless the user supplied it
375
+
376
+ # Unlike in other pipelines, latents need to be generated in the target device
377
+ # for 1-to-1 results reproducibility with the CompVis implementation.
378
+ # However this currently doesn't work in `mps`.
379
+ latents_shape = (batch_size, self.unet.config.in_channels, height // 8, width // 8)
380
+ latents_dtype = text_embeddings.dtype
381
+ if latents is None:
382
+ if self.device.type == "mps":
383
+ # randn does not work reproducibly on mps
384
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
385
+ self.device
386
+ )
387
+ else:
388
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
389
+ else:
390
+ if latents.shape != latents_shape:
391
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
392
+ latents = latents.to(self.device)
393
+
394
+ # scale the initial noise by the standard deviation required by the scheduler
395
+ latents = latents * self.scheduler.init_noise_sigma
396
+
397
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
398
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
399
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
400
+ # and should be between [0, 1]
401
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
402
+ extra_step_kwargs = {}
403
+ if accepts_eta:
404
+ extra_step_kwargs["eta"] = eta
405
+
406
+ # check if the scheduler accepts generator
407
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
408
+ if accepts_generator:
409
+ extra_step_kwargs["generator"] = generator
410
+
411
+ with self.progress_bar(total=num_inference_steps):
412
+ for i, t in enumerate(timesteps):
413
+ # expand the latents if we are doing classifier free guidance
414
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
415
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
416
+
417
+ # predict the noise residual
418
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
419
+
420
+ # perform classifier free guidance
421
+ if do_classifier_free_guidance:
422
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
423
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
424
+
425
+ # perform clip guidance
426
+ if clip_guidance_scale > 0:
427
+ text_embeddings_for_guidance = (
428
+ text_embeddings.chunk(2)[1] if do_classifier_free_guidance else text_embeddings
429
+ )
430
+ noise_pred, latents = self.cond_fn(
431
+ latents,
432
+ t,
433
+ i,
434
+ text_embeddings_for_guidance,
435
+ noise_pred,
436
+ clip_image_embeddings,
437
+ clip_guidance_scale,
438
+ )
439
+
440
+ # compute the previous noisy sample x_t -> x_t-1
441
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
442
+
443
+ # Hardcode 0.18215 because stable-diffusion-2-base has not self.vae.config.scaling_factor
444
+ latents = 1 / 0.18215 * latents
445
+ image = self.vae.decode(latents).sample
446
+
447
+ image = (image / 2 + 0.5).clamp(0, 1)
448
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
449
+
450
+ if output_type == "pil":
451
+ image = self.numpy_to_pil(image)
452
+
453
+ if not return_dict:
454
+ return (image, None)
455
+
456
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
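
For orientation, a hedged usage sketch of the images-mixing pipeline above. It assumes the file is loadable as the `clip_guided_images_mixing_stable_diffusion` community pipeline, uses a LAION CLIP checkpoint for guidance, and leaves the optional CoCa captioner unset, so both prompts must be supplied; `content.png` and `style.png` are placeholder local files.

```python
# Sketch only: pipeline name, CLIP checkpoint and local image paths are assumptions.
import torch
from PIL import Image
from diffusers import DiffusionPipeline
from transformers import CLIPFeatureExtractor, CLIPModel

feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
clip_model = CLIPModel.from_pretrained(
    "laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
)

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_images_mixing_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_attention_slicing()

content_image = Image.open("content.png").convert("RGB")
style_image = Image.open("style.png").convert("RGB")

# coca_model/coca_tokenizer/coca_transform are None here, so both prompts are
# required (see the CoCa checks at the top of __call__).
image = pipe(
    content_image=content_image,
    style_image=style_image,
    content_prompt="a photo of a mountain lake at sunrise",
    style_prompt="an oil painting in the style of van gogh",
    noise_strength=0.6,
    num_inference_steps=50,
    clip_guidance_scale=100,
    generator=torch.Generator(device="cuda").manual_seed(17),
).images[0]
image.save("mixed.png")
```
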
v0.19.2/clip_guided_stable_diffusion.py ADDED
@@ -0,0 +1,347 @@
1
+ import inspect
2
+ from typing import List, Optional, Union
3
+
4
+ import torch
5
+ from torch import nn
6
+ from torch.nn import functional as F
7
+ from torchvision import transforms
8
+ from transformers import CLIPImageProcessor, CLIPModel, CLIPTextModel, CLIPTokenizer
9
+
10
+ from diffusers import (
11
+ AutoencoderKL,
12
+ DDIMScheduler,
13
+ DiffusionPipeline,
14
+ DPMSolverMultistepScheduler,
15
+ LMSDiscreteScheduler,
16
+ PNDMScheduler,
17
+ UNet2DConditionModel,
18
+ )
19
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
20
+
21
+
22
+ class MakeCutouts(nn.Module):
23
+ def __init__(self, cut_size, cut_power=1.0):
24
+ super().__init__()
25
+
26
+ self.cut_size = cut_size
27
+ self.cut_power = cut_power
28
+
29
+ def forward(self, pixel_values, num_cutouts):
30
+ sideY, sideX = pixel_values.shape[2:4]
31
+ max_size = min(sideX, sideY)
32
+ min_size = min(sideX, sideY, self.cut_size)
33
+ cutouts = []
34
+ for _ in range(num_cutouts):
35
+ size = int(torch.rand([]) ** self.cut_power * (max_size - min_size) + min_size)
36
+ offsetx = torch.randint(0, sideX - size + 1, ())
37
+ offsety = torch.randint(0, sideY - size + 1, ())
38
+ cutout = pixel_values[:, :, offsety : offsety + size, offsetx : offsetx + size]
39
+ cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
40
+ return torch.cat(cutouts)
41
+
42
+
43
+ def spherical_dist_loss(x, y):
44
+ x = F.normalize(x, dim=-1)
45
+ y = F.normalize(y, dim=-1)
46
+ return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
47
+
48
+
49
+ def set_requires_grad(model, value):
50
+ for param in model.parameters():
51
+ param.requires_grad = value
52
+
53
+
54
+ class CLIPGuidedStableDiffusion(DiffusionPipeline):
55
+ """CLIP guided stable diffusion based on the amazing repo by @crowsonkb and @Jack000
56
+ - https://github.com/Jack000/glid-3-xl
57
+ - https://github.dev/crowsonkb/k-diffusion
58
+ """
59
+
60
+ def __init__(
61
+ self,
62
+ vae: AutoencoderKL,
63
+ text_encoder: CLIPTextModel,
64
+ clip_model: CLIPModel,
65
+ tokenizer: CLIPTokenizer,
66
+ unet: UNet2DConditionModel,
67
+ scheduler: Union[PNDMScheduler, LMSDiscreteScheduler, DDIMScheduler, DPMSolverMultistepScheduler],
68
+ feature_extractor: CLIPImageProcessor,
69
+ ):
70
+ super().__init__()
71
+ self.register_modules(
72
+ vae=vae,
73
+ text_encoder=text_encoder,
74
+ clip_model=clip_model,
75
+ tokenizer=tokenizer,
76
+ unet=unet,
77
+ scheduler=scheduler,
78
+ feature_extractor=feature_extractor,
79
+ )
80
+
81
+ self.normalize = transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std)
82
+ self.cut_out_size = (
83
+ feature_extractor.size
84
+ if isinstance(feature_extractor.size, int)
85
+ else feature_extractor.size["shortest_edge"]
86
+ )
87
+ self.make_cutouts = MakeCutouts(self.cut_out_size)
88
+
89
+ set_requires_grad(self.text_encoder, False)
90
+ set_requires_grad(self.clip_model, False)
91
+
92
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
93
+ if slice_size == "auto":
94
+ # half the attention head size is usually a good trade-off between
95
+ # speed and memory
96
+ slice_size = self.unet.config.attention_head_dim // 2
97
+ self.unet.set_attention_slice(slice_size)
98
+
99
+ def disable_attention_slicing(self):
100
+ self.enable_attention_slicing(None)
101
+
102
+ def freeze_vae(self):
103
+ set_requires_grad(self.vae, False)
104
+
105
+ def unfreeze_vae(self):
106
+ set_requires_grad(self.vae, True)
107
+
108
+ def freeze_unet(self):
109
+ set_requires_grad(self.unet, False)
110
+
111
+ def unfreeze_unet(self):
112
+ set_requires_grad(self.unet, True)
113
+
114
+ @torch.enable_grad()
115
+ def cond_fn(
116
+ self,
117
+ latents,
118
+ timestep,
119
+ index,
120
+ text_embeddings,
121
+ noise_pred_original,
122
+ text_embeddings_clip,
123
+ clip_guidance_scale,
124
+ num_cutouts,
125
+ use_cutouts=True,
126
+ ):
127
+ latents = latents.detach().requires_grad_()
128
+
129
+ latent_model_input = self.scheduler.scale_model_input(latents, timestep)
130
+
131
+ # predict the noise residual
132
+ noise_pred = self.unet(latent_model_input, timestep, encoder_hidden_states=text_embeddings).sample
133
+
134
+ if isinstance(self.scheduler, (PNDMScheduler, DDIMScheduler, DPMSolverMultistepScheduler)):
135
+ alpha_prod_t = self.scheduler.alphas_cumprod[timestep]
136
+ beta_prod_t = 1 - alpha_prod_t
137
+ # compute predicted original sample from predicted noise also called
138
+ # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
139
+ pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5)
140
+
141
+ fac = torch.sqrt(beta_prod_t)
142
+ sample = pred_original_sample * (fac) + latents * (1 - fac)
143
+ elif isinstance(self.scheduler, LMSDiscreteScheduler):
144
+ sigma = self.scheduler.sigmas[index]
145
+ sample = latents - sigma * noise_pred
146
+ else:
147
+ raise ValueError(f"scheduler type {type(self.scheduler)} not supported")
148
+
149
+ sample = 1 / self.vae.config.scaling_factor * sample
150
+ image = self.vae.decode(sample).sample
151
+ image = (image / 2 + 0.5).clamp(0, 1)
152
+
153
+ if use_cutouts:
154
+ image = self.make_cutouts(image, num_cutouts)
155
+ else:
156
+ image = transforms.Resize(self.cut_out_size)(image)
157
+ image = self.normalize(image).to(latents.dtype)
158
+
159
+ image_embeddings_clip = self.clip_model.get_image_features(image)
160
+ image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
161
+
162
+ if use_cutouts:
163
+ dists = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip)
164
+ dists = dists.view([num_cutouts, sample.shape[0], -1])
165
+ loss = dists.sum(2).mean(0).sum() * clip_guidance_scale
166
+ else:
167
+ loss = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip).mean() * clip_guidance_scale
168
+
169
+ grads = -torch.autograd.grad(loss, latents)[0]
170
+
171
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
172
+ latents = latents.detach() + grads * (sigma**2)
173
+ noise_pred = noise_pred_original
174
+ else:
175
+ noise_pred = noise_pred_original - torch.sqrt(beta_prod_t) * grads
176
+ return noise_pred, latents
177
+
178
+ @torch.no_grad()
179
+ def __call__(
180
+ self,
181
+ prompt: Union[str, List[str]],
182
+ height: Optional[int] = 512,
183
+ width: Optional[int] = 512,
184
+ num_inference_steps: Optional[int] = 50,
185
+ guidance_scale: Optional[float] = 7.5,
186
+ num_images_per_prompt: Optional[int] = 1,
187
+ eta: float = 0.0,
188
+ clip_guidance_scale: Optional[float] = 100,
189
+ clip_prompt: Optional[Union[str, List[str]]] = None,
190
+ num_cutouts: Optional[int] = 4,
191
+ use_cutouts: Optional[bool] = True,
192
+ generator: Optional[torch.Generator] = None,
193
+ latents: Optional[torch.FloatTensor] = None,
194
+ output_type: Optional[str] = "pil",
195
+ return_dict: bool = True,
196
+ ):
197
+ if isinstance(prompt, str):
198
+ batch_size = 1
199
+ elif isinstance(prompt, list):
200
+ batch_size = len(prompt)
201
+ else:
202
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
203
+
204
+ if height % 8 != 0 or width % 8 != 0:
205
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
206
+
207
+ # get prompt text embeddings
208
+ text_input = self.tokenizer(
209
+ prompt,
210
+ padding="max_length",
211
+ max_length=self.tokenizer.model_max_length,
212
+ truncation=True,
213
+ return_tensors="pt",
214
+ )
215
+ text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
216
+ # duplicate text embeddings for each generation per prompt
217
+ text_embeddings = text_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
218
+
219
+ if clip_guidance_scale > 0:
220
+ if clip_prompt is not None:
221
+ clip_text_input = self.tokenizer(
222
+ clip_prompt,
223
+ padding="max_length",
224
+ max_length=self.tokenizer.model_max_length,
225
+ truncation=True,
226
+ return_tensors="pt",
227
+ ).input_ids.to(self.device)
228
+ else:
229
+ clip_text_input = text_input.input_ids.to(self.device)
230
+ text_embeddings_clip = self.clip_model.get_text_features(clip_text_input)
231
+ text_embeddings_clip = text_embeddings_clip / text_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
232
+ # duplicate text embeddings clip for each generation per prompt
233
+ text_embeddings_clip = text_embeddings_clip.repeat_interleave(num_images_per_prompt, dim=0)
234
+
235
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
236
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
237
+ # corresponds to doing no classifier free guidance.
238
+ do_classifier_free_guidance = guidance_scale > 1.0
239
+ # get unconditional embeddings for classifier free guidance
240
+ if do_classifier_free_guidance:
241
+ max_length = text_input.input_ids.shape[-1]
242
+ uncond_input = self.tokenizer([""], padding="max_length", max_length=max_length, return_tensors="pt")
243
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
244
+ # duplicate unconditional embeddings for each generation per prompt
245
+ uncond_embeddings = uncond_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
246
+
247
+ # For classifier free guidance, we need to do two forward passes.
248
+ # Here we concatenate the unconditional and text embeddings into a single batch
249
+ # to avoid doing two forward passes
250
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
251
+
252
+ # get the initial random noise unless the user supplied it
253
+
254
+ # Unlike in other pipelines, latents need to be generated in the target device
255
+ # for 1-to-1 results reproducibility with the CompVis implementation.
256
+ # However this currently doesn't work in `mps`.
257
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
258
+ latents_dtype = text_embeddings.dtype
259
+ if latents is None:
260
+ if self.device.type == "mps":
261
+ # randn does not work reproducibly on mps
262
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
263
+ self.device
264
+ )
265
+ else:
266
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
267
+ else:
268
+ if latents.shape != latents_shape:
269
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
270
+ latents = latents.to(self.device)
271
+
272
+ # set timesteps
273
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
274
+ extra_set_kwargs = {}
275
+ if accepts_offset:
276
+ extra_set_kwargs["offset"] = 1
277
+
278
+ self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
279
+
280
+ # Some schedulers like PNDM have timesteps as arrays
281
+ # It's more optimized to move all timesteps to correct device beforehand
282
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
283
+
284
+ # scale the initial noise by the standard deviation required by the scheduler
285
+ latents = latents * self.scheduler.init_noise_sigma
286
+
287
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
288
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
289
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
290
+ # and should be between [0, 1]
291
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
292
+ extra_step_kwargs = {}
293
+ if accepts_eta:
294
+ extra_step_kwargs["eta"] = eta
295
+
296
+ # check if the scheduler accepts generator
297
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
298
+ if accepts_generator:
299
+ extra_step_kwargs["generator"] = generator
300
+
301
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
302
+ # expand the latents if we are doing classifier free guidance
303
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
304
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
305
+
306
+ # predict the noise residual
307
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
308
+
309
+ # perform classifier free guidance
310
+ if do_classifier_free_guidance:
311
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
312
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
313
+
314
+ # perform clip guidance
315
+ if clip_guidance_scale > 0:
316
+ text_embeddings_for_guidance = (
317
+ text_embeddings.chunk(2)[1] if do_classifier_free_guidance else text_embeddings
318
+ )
319
+ noise_pred, latents = self.cond_fn(
320
+ latents,
321
+ t,
322
+ i,
323
+ text_embeddings_for_guidance,
324
+ noise_pred,
325
+ text_embeddings_clip,
326
+ clip_guidance_scale,
327
+ num_cutouts,
328
+ use_cutouts,
329
+ )
330
+
331
+ # compute the previous noisy sample x_t -> x_t-1
332
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
333
+
334
+ # scale and decode the image latents with vae
335
+ latents = 1 / self.vae.config.scaling_factor * latents
336
+ image = self.vae.decode(latents).sample
337
+
338
+ image = (image / 2 + 0.5).clamp(0, 1)
339
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
340
+
341
+ if output_type == "pil":
342
+ image = self.numpy_to_pil(image)
343
+
344
+ if not return_dict:
345
+ return (image, None)
346
+
347
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
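
A usage sketch for the CLIP-guided text-to-image pipeline above, assuming it is loadable as the `clip_guided_stable_diffusion` community pipeline and that a LAION CLIP checkpoint is used for guidance; the prompt, seed and guidance values are illustrative.

```python
# Sketch only: pipeline name, CLIP checkpoint and parameter values are assumptions.
import torch
from diffusers import DiffusionPipeline
from transformers import CLIPImageProcessor, CLIPModel

feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
clip_model = CLIPModel.from_pretrained(
    "laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
)

guided_pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
).to("cuda")
guided_pipeline.enable_attention_slicing()

image = guided_pipeline(
    "fantasy book cover, full moon, intricate, highly detailed, digital painting",
    num_inference_steps=50,
    guidance_scale=7.5,
    clip_guidance_scale=100,  # set to 0 to disable the extra CLIP guidance step
    num_cutouts=4,
    use_cutouts=False,
    generator=torch.Generator(device="cuda").manual_seed(0),
).images[0]
image.save("clip_guided.png")
```
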
v0.19.2/clip_guided_stable_diffusion_img2img.py ADDED
@@ -0,0 +1,496 @@
1
+ import inspect
2
+ from typing import List, Optional, Union
3
+
4
+ import numpy as np
5
+ import PIL
6
+ import torch
7
+ from torch import nn
8
+ from torch.nn import functional as F
9
+ from torchvision import transforms
10
+ from transformers import CLIPFeatureExtractor, CLIPModel, CLIPTextModel, CLIPTokenizer
11
+
12
+ from diffusers import (
13
+ AutoencoderKL,
14
+ DDIMScheduler,
15
+ DiffusionPipeline,
16
+ DPMSolverMultistepScheduler,
17
+ LMSDiscreteScheduler,
18
+ PNDMScheduler,
19
+ UNet2DConditionModel,
20
+ )
21
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
22
+ from diffusers.utils import (
23
+ PIL_INTERPOLATION,
24
+ deprecate,
25
+ randn_tensor,
26
+ )
27
+
28
+
29
+ EXAMPLE_DOC_STRING = """
30
+ Examples:
31
+ ```
32
+ from io import BytesIO
33
+
34
+ import requests
35
+ import torch
36
+ from diffusers import DiffusionPipeline
37
+ from PIL import Image
38
+ from transformers import CLIPFeatureExtractor, CLIPModel
39
+
40
+ feature_extractor = CLIPFeatureExtractor.from_pretrained(
41
+ "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
42
+ )
43
+ clip_model = CLIPModel.from_pretrained(
44
+ "laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
45
+ )
46
+
47
+
48
+ guided_pipeline = DiffusionPipeline.from_pretrained(
49
+ "CompVis/stable-diffusion-v1-4",
50
+ # custom_pipeline="clip_guided_stable_diffusion",
51
+ custom_pipeline="/home/njindal/diffusers/examples/community/clip_guided_stable_diffusion.py",
52
+ clip_model=clip_model,
53
+ feature_extractor=feature_extractor,
54
+ torch_dtype=torch.float16,
55
+ )
56
+ guided_pipeline.enable_attention_slicing()
57
+ guided_pipeline = guided_pipeline.to("cuda")
58
+
59
+ prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
60
+
61
+ url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
62
+
63
+ response = requests.get(url)
64
+ init_image = Image.open(BytesIO(response.content)).convert("RGB")
65
+
66
+ image = guided_pipeline(
67
+ prompt=prompt,
68
+ num_inference_steps=30,
69
+ image=init_image,
70
+ strength=0.75,
71
+ guidance_scale=7.5,
72
+ clip_guidance_scale=100,
73
+ num_cutouts=4,
74
+ use_cutouts=False,
75
+ ).images[0]
76
+ display(image)
77
+ ```
78
+ """
79
+
80
+
81
+ def preprocess(image, w, h):
82
+ if isinstance(image, torch.Tensor):
83
+ return image
84
+ elif isinstance(image, PIL.Image.Image):
85
+ image = [image]
86
+
87
+ if isinstance(image[0], PIL.Image.Image):
88
+ image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image]
89
+ image = np.concatenate(image, axis=0)
90
+ image = np.array(image).astype(np.float32) / 255.0
91
+ image = image.transpose(0, 3, 1, 2)
92
+ image = 2.0 * image - 1.0
93
+ image = torch.from_numpy(image)
94
+ elif isinstance(image[0], torch.Tensor):
95
+ image = torch.cat(image, dim=0)
96
+ return image
97
+
98
+
99
+ class MakeCutouts(nn.Module):
100
+ def __init__(self, cut_size, cut_power=1.0):
101
+ super().__init__()
102
+
103
+ self.cut_size = cut_size
104
+ self.cut_power = cut_power
105
+
106
+ def forward(self, pixel_values, num_cutouts):
107
+ sideY, sideX = pixel_values.shape[2:4]
108
+ max_size = min(sideX, sideY)
109
+ min_size = min(sideX, sideY, self.cut_size)
110
+ cutouts = []
111
+ for _ in range(num_cutouts):
112
+ size = int(torch.rand([]) ** self.cut_power * (max_size - min_size) + min_size)
113
+ offsetx = torch.randint(0, sideX - size + 1, ())
114
+ offsety = torch.randint(0, sideY - size + 1, ())
115
+ cutout = pixel_values[:, :, offsety : offsety + size, offsetx : offsetx + size]
116
+ cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
117
+ return torch.cat(cutouts)
118
+
119
+
120
+ def spherical_dist_loss(x, y):
121
+ x = F.normalize(x, dim=-1)
122
+ y = F.normalize(y, dim=-1)
123
+ return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
124
+
125
+
126
+ def set_requires_grad(model, value):
127
+ for param in model.parameters():
128
+ param.requires_grad = value
129
+
130
+
131
+ class CLIPGuidedStableDiffusion(DiffusionPipeline):
132
+ """CLIP guided stable diffusion based on the amazing repo by @crowsonkb and @Jack000
133
+ - https://github.com/Jack000/glid-3-xl
134
+ - https://github.dev/crowsonkb/k-diffusion
135
+ """
136
+
137
+ def __init__(
138
+ self,
139
+ vae: AutoencoderKL,
140
+ text_encoder: CLIPTextModel,
141
+ clip_model: CLIPModel,
142
+ tokenizer: CLIPTokenizer,
143
+ unet: UNet2DConditionModel,
144
+ scheduler: Union[PNDMScheduler, LMSDiscreteScheduler, DDIMScheduler, DPMSolverMultistepScheduler],
145
+ feature_extractor: CLIPFeatureExtractor,
146
+ ):
147
+ super().__init__()
148
+ self.register_modules(
149
+ vae=vae,
150
+ text_encoder=text_encoder,
151
+ clip_model=clip_model,
152
+ tokenizer=tokenizer,
153
+ unet=unet,
154
+ scheduler=scheduler,
155
+ feature_extractor=feature_extractor,
156
+ )
157
+
158
+ self.normalize = transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std)
159
+ self.cut_out_size = (
160
+ feature_extractor.size
161
+ if isinstance(feature_extractor.size, int)
162
+ else feature_extractor.size["shortest_edge"]
163
+ )
164
+ self.make_cutouts = MakeCutouts(self.cut_out_size)
165
+
166
+ set_requires_grad(self.text_encoder, False)
167
+ set_requires_grad(self.clip_model, False)
168
+
169
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
170
+ if slice_size == "auto":
171
+ # half the attention head size is usually a good trade-off between
172
+ # speed and memory
173
+ slice_size = self.unet.config.attention_head_dim // 2
174
+ self.unet.set_attention_slice(slice_size)
175
+
176
+ def disable_attention_slicing(self):
177
+ self.enable_attention_slicing(None)
178
+
179
+ def freeze_vae(self):
180
+ set_requires_grad(self.vae, False)
181
+
182
+ def unfreeze_vae(self):
183
+ set_requires_grad(self.vae, True)
184
+
185
+ def freeze_unet(self):
186
+ set_requires_grad(self.unet, False)
187
+
188
+ def unfreeze_unet(self):
189
+ set_requires_grad(self.unet, True)
190
+
191
+ def get_timesteps(self, num_inference_steps, strength, device):
192
+ # get the original timestep using init_timestep
193
+ init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
194
+
195
+ t_start = max(num_inference_steps - init_timestep, 0)
196
+ timesteps = self.scheduler.timesteps[t_start:]
197
+
198
+ return timesteps, num_inference_steps - t_start
199
+
200
+ def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None):
201
+ if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
202
+ raise ValueError(
203
+ f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
204
+ )
205
+
206
+ image = image.to(device=device, dtype=dtype)
207
+
208
+ batch_size = batch_size * num_images_per_prompt
209
+ if isinstance(generator, list) and len(generator) != batch_size:
210
+ raise ValueError(
211
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
212
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
213
+ )
214
+
215
+ if isinstance(generator, list):
216
+ init_latents = [
217
+ self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
218
+ ]
219
+ init_latents = torch.cat(init_latents, dim=0)
220
+ else:
221
+ init_latents = self.vae.encode(image).latent_dist.sample(generator)
222
+
223
+ init_latents = self.vae.config.scaling_factor * init_latents
224
+
225
+ if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0:
226
+ # expand init_latents for batch_size
227
+ deprecation_message = (
228
+ f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial"
229
+ " images (`image`). Initial images are now duplicating to match the number of text prompts. Note"
230
+ " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update"
231
+ " your script to pass as many initial images as text prompts to suppress this warning."
232
+ )
233
+ deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False)
234
+ additional_image_per_prompt = batch_size // init_latents.shape[0]
235
+ init_latents = torch.cat([init_latents] * additional_image_per_prompt, dim=0)
236
+ elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
237
+ raise ValueError(
238
+ f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
239
+ )
240
+ else:
241
+ init_latents = torch.cat([init_latents], dim=0)
242
+
243
+ shape = init_latents.shape
244
+ noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
245
+
246
+ # get latents
247
+ init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
248
+ latents = init_latents
249
+
250
+ return latents
251
+
252
+ @torch.enable_grad()
253
+ def cond_fn(
254
+ self,
255
+ latents,
256
+ timestep,
257
+ index,
258
+ text_embeddings,
259
+ noise_pred_original,
260
+ text_embeddings_clip,
261
+ clip_guidance_scale,
262
+ num_cutouts,
263
+ use_cutouts=True,
264
+ ):
265
+ latents = latents.detach().requires_grad_()
266
+
267
+ latent_model_input = self.scheduler.scale_model_input(latents, timestep)
268
+
269
+ # predict the noise residual
270
+ noise_pred = self.unet(latent_model_input, timestep, encoder_hidden_states=text_embeddings).sample
271
+
272
+ if isinstance(self.scheduler, (PNDMScheduler, DDIMScheduler, DPMSolverMultistepScheduler)):
273
+ alpha_prod_t = self.scheduler.alphas_cumprod[timestep]
274
+ beta_prod_t = 1 - alpha_prod_t
275
+ # compute predicted original sample from predicted noise also called
276
+ # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
277
+ pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5)
278
+
279
+ fac = torch.sqrt(beta_prod_t)
280
+ sample = pred_original_sample * (fac) + latents * (1 - fac)
281
+ elif isinstance(self.scheduler, LMSDiscreteScheduler):
282
+ sigma = self.scheduler.sigmas[index]
283
+ sample = latents - sigma * noise_pred
284
+ else:
285
+ raise ValueError(f"scheduler type {type(self.scheduler)} not supported")
286
+
287
+ sample = 1 / self.vae.config.scaling_factor * sample
288
+ image = self.vae.decode(sample).sample
289
+ image = (image / 2 + 0.5).clamp(0, 1)
290
+
291
+ if use_cutouts:
292
+ image = self.make_cutouts(image, num_cutouts)
293
+ else:
294
+ image = transforms.Resize(self.cut_out_size)(image)
295
+ image = self.normalize(image).to(latents.dtype)
296
+
297
+ image_embeddings_clip = self.clip_model.get_image_features(image)
298
+ image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
299
+
300
+ if use_cutouts:
301
+ dists = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip)
302
+ dists = dists.view([num_cutouts, sample.shape[0], -1])
303
+ loss = dists.sum(2).mean(0).sum() * clip_guidance_scale
304
+ else:
305
+ loss = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip).mean() * clip_guidance_scale
306
+
307
+ grads = -torch.autograd.grad(loss, latents)[0]
308
+
309
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
310
+ latents = latents.detach() + grads * (sigma**2)
311
+ noise_pred = noise_pred_original
312
+ else:
313
+ noise_pred = noise_pred_original - torch.sqrt(beta_prod_t) * grads
314
+ return noise_pred, latents
315
+
316
+ @torch.no_grad()
317
+ def __call__(
318
+ self,
319
+ prompt: Union[str, List[str]],
320
+ height: Optional[int] = 512,
321
+ width: Optional[int] = 512,
322
+ image: Union[torch.FloatTensor, PIL.Image.Image] = None,
323
+ strength: float = 0.8,
324
+ num_inference_steps: Optional[int] = 50,
325
+ guidance_scale: Optional[float] = 7.5,
326
+ num_images_per_prompt: Optional[int] = 1,
327
+ eta: float = 0.0,
328
+ clip_guidance_scale: Optional[float] = 100,
329
+ clip_prompt: Optional[Union[str, List[str]]] = None,
330
+ num_cutouts: Optional[int] = 4,
331
+ use_cutouts: Optional[bool] = True,
332
+ generator: Optional[torch.Generator] = None,
333
+ latents: Optional[torch.FloatTensor] = None,
334
+ output_type: Optional[str] = "pil",
335
+ return_dict: bool = True,
336
+ ):
337
+ if isinstance(prompt, str):
338
+ batch_size = 1
339
+ elif isinstance(prompt, list):
340
+ batch_size = len(prompt)
341
+ else:
342
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
343
+
344
+ if height % 8 != 0 or width % 8 != 0:
345
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
346
+
347
+ # get prompt text embeddings
348
+ text_input = self.tokenizer(
349
+ prompt,
350
+ padding="max_length",
351
+ max_length=self.tokenizer.model_max_length,
352
+ truncation=True,
353
+ return_tensors="pt",
354
+ )
355
+ text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
356
+ # duplicate text embeddings for each generation per prompt
357
+ text_embeddings = text_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
358
+
359
+ # set timesteps
360
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
361
+ extra_set_kwargs = {}
362
+ if accepts_offset:
363
+ extra_set_kwargs["offset"] = 1
364
+
365
+ self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
366
+ # Some schedulers like PNDM have timesteps as arrays
367
+ # It's more optimized to move all timesteps to correct device beforehand
368
+ self.scheduler.timesteps.to(self.device)
369
+
370
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, self.device)
371
+ latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
372
+
373
+ # Preprocess image
374
+ image = preprocess(image, width, height)
375
+ latents = self.prepare_latents(
376
+ image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, self.device, generator
377
+ )
378
+
379
+ if clip_guidance_scale > 0:
380
+ if clip_prompt is not None:
381
+ clip_text_input = self.tokenizer(
382
+ clip_prompt,
383
+ padding="max_length",
384
+ max_length=self.tokenizer.model_max_length,
385
+ truncation=True,
386
+ return_tensors="pt",
387
+ ).input_ids.to(self.device)
388
+ else:
389
+ clip_text_input = text_input.input_ids.to(self.device)
390
+ text_embeddings_clip = self.clip_model.get_text_features(clip_text_input)
391
+ text_embeddings_clip = text_embeddings_clip / text_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
392
+ # duplicate text embeddings clip for each generation per prompt
393
+ text_embeddings_clip = text_embeddings_clip.repeat_interleave(num_images_per_prompt, dim=0)
394
+
395
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
396
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
397
+ # corresponds to doing no classifier free guidance.
398
+ do_classifier_free_guidance = guidance_scale > 1.0
399
+ # get unconditional embeddings for classifier free guidance
400
+ if do_classifier_free_guidance:
401
+ max_length = text_input.input_ids.shape[-1]
402
+ uncond_input = self.tokenizer([""], padding="max_length", max_length=max_length, return_tensors="pt")
403
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
404
+ # duplicate unconditional embeddings for each generation per prompt
405
+ uncond_embeddings = uncond_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
406
+
407
+ # For classifier free guidance, we need to do two forward passes.
408
+ # Here we concatenate the unconditional and text embeddings into a single batch
409
+ # to avoid doing two forward passes
410
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
411
+
412
+ # get the initial random noise unless the user supplied it
413
+
414
+ # Unlike in other pipelines, latents need to be generated in the target device
415
+ # for 1-to-1 results reproducibility with the CompVis implementation.
416
+ # However this currently doesn't work in `mps`.
417
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
418
+ latents_dtype = text_embeddings.dtype
419
+ if latents is None:
420
+ if self.device.type == "mps":
421
+ # randn does not work reproducibly on mps
422
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
423
+ self.device
424
+ )
425
+ else:
426
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
427
+ else:
428
+ if latents.shape != latents_shape:
429
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
430
+ latents = latents.to(self.device)
431
+
432
+ # scale the initial noise by the standard deviation required by the scheduler
433
+ latents = latents * self.scheduler.init_noise_sigma
434
+
435
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
436
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
437
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
438
+ # and should be between [0, 1]
439
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
440
+ extra_step_kwargs = {}
441
+ if accepts_eta:
442
+ extra_step_kwargs["eta"] = eta
443
+
444
+ # check if the scheduler accepts generator
445
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
446
+ if accepts_generator:
447
+ extra_step_kwargs["generator"] = generator
448
+
449
+ with self.progress_bar(total=num_inference_steps):
450
+ for i, t in enumerate(timesteps):
451
+ # expand the latents if we are doing classifier free guidance
452
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
453
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
454
+
455
+ # predict the noise residual
456
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
457
+
458
+ # perform classifier free guidance
459
+ if do_classifier_free_guidance:
460
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
461
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
462
+
463
+ # perform clip guidance
464
+ if clip_guidance_scale > 0:
465
+ text_embeddings_for_guidance = (
466
+ text_embeddings.chunk(2)[1] if do_classifier_free_guidance else text_embeddings
467
+ )
468
+ noise_pred, latents = self.cond_fn(
469
+ latents,
470
+ t,
471
+ i,
472
+ text_embeddings_for_guidance,
473
+ noise_pred,
474
+ text_embeddings_clip,
475
+ clip_guidance_scale,
476
+ num_cutouts,
477
+ use_cutouts,
478
+ )
479
+
480
+ # compute the previous noisy sample x_t -> x_t-1
481
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
482
+
483
+ # scale and decode the image latents with vae
484
+ latents = 1 / self.vae.config.scaling_factor * latents
485
+ image = self.vae.decode(latents).sample
486
+
487
+ image = (image / 2 + 0.5).clamp(0, 1)
488
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
489
+
490
+ if output_type == "pil":
491
+ image = self.numpy_to_pil(image)
492
+
493
+ if not return_dict:
494
+ return (image, None)
495
+
496
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
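
The `EXAMPLE_DOC_STRING` above points `custom_pipeline` at a path on the author's machine; the sketch below shows the same call assuming the file is instead loadable as the `clip_guided_stable_diffusion_img2img` community pipeline. Checkpoints and parameter values are illustrative.

```python
# Sketch only: pipeline name, CLIP checkpoint and parameter values are assumptions.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
from transformers import CLIPFeatureExtractor, CLIPModel

feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
clip_model = CLIPModel.from_pretrained(
    "laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
)

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_stable_diffusion_img2img",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_attention_slicing()

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
)

image = pipe(
    prompt="a fantasy landscape, trending on artstation",
    image=init_image,
    strength=0.75,
    num_inference_steps=30,
    guidance_scale=7.5,
    clip_guidance_scale=100,
    num_cutouts=4,
    use_cutouts=False,
).images[0]
image.save("clip_guided_img2img.png")
```
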
v0.19.2/composable_stable_diffusion.py ADDED
@@ -0,0 +1,580 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import inspect
16
+ from typing import Callable, List, Optional, Union
17
+
18
+ import torch
19
+ from packaging import version
20
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
21
+
22
+ from diffusers import DiffusionPipeline
23
+ from diffusers.configuration_utils import FrozenDict
24
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
25
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
26
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
27
+ from diffusers.schedulers import (
28
+ DDIMScheduler,
29
+ DPMSolverMultistepScheduler,
30
+ EulerAncestralDiscreteScheduler,
31
+ EulerDiscreteScheduler,
32
+ LMSDiscreteScheduler,
33
+ PNDMScheduler,
34
+ )
35
+ from diffusers.utils import deprecate, is_accelerate_available, logging
36
+
37
+
38
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
39
+
40
+
41
+ class ComposableStableDiffusionPipeline(DiffusionPipeline):
42
+ r"""
43
+ Pipeline for text-to-image generation using Stable Diffusion.
44
+
45
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
46
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
47
+
48
+ Args:
49
+ vae ([`AutoencoderKL`]):
50
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
51
+ text_encoder ([`CLIPTextModel`]):
52
+ Frozen text-encoder. Stable Diffusion uses the text portion of
53
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
54
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
55
+ tokenizer (`CLIPTokenizer`):
56
+ Tokenizer of class
57
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
58
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
59
+ scheduler ([`SchedulerMixin`]):
60
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
61
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
62
+ safety_checker ([`StableDiffusionSafetyChecker`]):
63
+ Classification module that estimates whether generated images could be considered offensive or harmful.
64
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
65
+ feature_extractor ([`CLIPImageProcessor`]):
66
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
67
+ """
68
+ _optional_components = ["safety_checker", "feature_extractor"]
69
+
70
+ def __init__(
71
+ self,
72
+ vae: AutoencoderKL,
73
+ text_encoder: CLIPTextModel,
74
+ tokenizer: CLIPTokenizer,
75
+ unet: UNet2DConditionModel,
76
+ scheduler: Union[
77
+ DDIMScheduler,
78
+ PNDMScheduler,
79
+ LMSDiscreteScheduler,
80
+ EulerDiscreteScheduler,
81
+ EulerAncestralDiscreteScheduler,
82
+ DPMSolverMultistepScheduler,
83
+ ],
84
+ safety_checker: StableDiffusionSafetyChecker,
85
+ feature_extractor: CLIPImageProcessor,
86
+ requires_safety_checker: bool = True,
87
+ ):
88
+ super().__init__()
89
+
90
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
91
+ deprecation_message = (
92
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
93
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
94
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
95
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
96
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
97
+ " file"
98
+ )
99
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
100
+ new_config = dict(scheduler.config)
101
+ new_config["steps_offset"] = 1
102
+ scheduler._internal_dict = FrozenDict(new_config)
103
+
104
+ if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
105
+ deprecation_message = (
106
+ f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
107
+ " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
108
+ " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
109
+ " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
110
+ " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
111
+ )
112
+ deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
113
+ new_config = dict(scheduler.config)
114
+ new_config["clip_sample"] = False
115
+ scheduler._internal_dict = FrozenDict(new_config)
116
+
117
+ if safety_checker is None and requires_safety_checker:
118
+ logger.warning(
119
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
120
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
121
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
122
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
123
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
124
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
125
+ )
126
+
127
+ if safety_checker is not None and feature_extractor is None:
128
+ raise ValueError(
129
+ "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
130
+ " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
131
+ )
132
+
133
+ is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
134
+ version.parse(unet.config._diffusers_version).base_version
135
+ ) < version.parse("0.9.0.dev0")
136
+ is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
137
+ if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
138
+ deprecation_message = (
139
+ "The configuration file of the unet has set the default `sample_size` to smaller than"
140
+ " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
141
+ " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
142
+ " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
143
+ " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
144
+ " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
145
+ " in the config might lead to incorrect results in future versions. If you have downloaded this"
146
+ " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
147
+ " the `unet/config.json` file"
148
+ )
149
+ deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
150
+ new_config = dict(unet.config)
151
+ new_config["sample_size"] = 64
152
+ unet._internal_dict = FrozenDict(new_config)
153
+
154
+ self.register_modules(
155
+ vae=vae,
156
+ text_encoder=text_encoder,
157
+ tokenizer=tokenizer,
158
+ unet=unet,
159
+ scheduler=scheduler,
160
+ safety_checker=safety_checker,
161
+ feature_extractor=feature_extractor,
162
+ )
163
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
164
+ self.register_to_config(requires_safety_checker=requires_safety_checker)
165
+
166
+ def enable_vae_slicing(self):
167
+ r"""
168
+ Enable sliced VAE decoding.
169
+
170
+ When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
171
+ steps. This is useful to save some memory and allow larger batch sizes.
172
+ """
173
+ self.vae.enable_slicing()
174
+
175
+ def disable_vae_slicing(self):
176
+ r"""
177
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
178
+ computing decoding in one step.
179
+ """
180
+ self.vae.disable_slicing()
181
+
182
+ def enable_sequential_cpu_offload(self, gpu_id=0):
183
+ r"""
184
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
185
+ text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
186
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
187
+ """
188
+ if is_accelerate_available():
189
+ from accelerate import cpu_offload
190
+ else:
191
+ raise ImportError("Please install accelerate via `pip install accelerate`")
192
+
193
+ device = torch.device(f"cuda:{gpu_id}")
194
+
195
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]:
196
+ if cpu_offloaded_model is not None:
197
+ cpu_offload(cpu_offloaded_model, device)
198
+
199
+ if self.safety_checker is not None:
200
+ # TODO(Patrick) - there is currently a bug with cpu offload of nn.Parameter in accelerate
201
+ # fix by only offloading self.safety_checker for now
202
+ cpu_offload(self.safety_checker.vision_model, device)
203
+
204
+ @property
205
+ def _execution_device(self):
206
+ r"""
207
+ Returns the device on which the pipeline's models will be executed. After calling
208
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
209
+ hooks.
210
+ """
211
+ if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
212
+ return self.device
213
+ for module in self.unet.modules():
214
+ if (
215
+ hasattr(module, "_hf_hook")
216
+ and hasattr(module._hf_hook, "execution_device")
217
+ and module._hf_hook.execution_device is not None
218
+ ):
219
+ return torch.device(module._hf_hook.execution_device)
220
+ return self.device
221
+
222
+ def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
223
+ r"""
224
+ Encodes the prompt into text encoder hidden states.
225
+
226
+ Args:
227
+ prompt (`str` or `list(int)`):
228
+ prompt to be encoded
229
+ device: (`torch.device`):
230
+ torch device
231
+ num_images_per_prompt (`int`):
232
+ number of images that should be generated per prompt
233
+ do_classifier_free_guidance (`bool`):
234
+ whether to use classifier free guidance or not
235
+ negative_prompt (`str` or `List[str]`):
236
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
237
+ if `guidance_scale` is less than `1`).
238
+ """
239
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
240
+
241
+ text_inputs = self.tokenizer(
242
+ prompt,
243
+ padding="max_length",
244
+ max_length=self.tokenizer.model_max_length,
245
+ truncation=True,
246
+ return_tensors="pt",
247
+ )
248
+ text_input_ids = text_inputs.input_ids
249
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
250
+
251
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
252
+ removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
253
+ logger.warning(
254
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
255
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
256
+ )
257
+
258
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
259
+ attention_mask = text_inputs.attention_mask.to(device)
260
+ else:
261
+ attention_mask = None
262
+
263
+ text_embeddings = self.text_encoder(
264
+ text_input_ids.to(device),
265
+ attention_mask=attention_mask,
266
+ )
267
+ text_embeddings = text_embeddings[0]
268
+
269
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
270
+ bs_embed, seq_len, _ = text_embeddings.shape
271
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
272
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
273
+
274
+ # get unconditional embeddings for classifier free guidance
275
+ if do_classifier_free_guidance:
276
+ uncond_tokens: List[str]
277
+ if negative_prompt is None:
278
+ uncond_tokens = [""] * batch_size
279
+ elif type(prompt) is not type(negative_prompt):
280
+ raise TypeError(
281
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
282
+ f" {type(prompt)}."
283
+ )
284
+ elif isinstance(negative_prompt, str):
285
+ uncond_tokens = [negative_prompt]
286
+ elif batch_size != len(negative_prompt):
287
+ raise ValueError(
288
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
289
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
290
+ " the batch size of `prompt`."
291
+ )
292
+ else:
293
+ uncond_tokens = negative_prompt
294
+
295
+ max_length = text_input_ids.shape[-1]
296
+ uncond_input = self.tokenizer(
297
+ uncond_tokens,
298
+ padding="max_length",
299
+ max_length=max_length,
300
+ truncation=True,
301
+ return_tensors="pt",
302
+ )
303
+
304
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
305
+ attention_mask = uncond_input.attention_mask.to(device)
306
+ else:
307
+ attention_mask = None
308
+
309
+ uncond_embeddings = self.text_encoder(
310
+ uncond_input.input_ids.to(device),
311
+ attention_mask=attention_mask,
312
+ )
313
+ uncond_embeddings = uncond_embeddings[0]
314
+
315
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
316
+ seq_len = uncond_embeddings.shape[1]
317
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
318
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
319
+
320
+ # For classifier free guidance, we need to do two forward passes.
321
+ # Here we concatenate the unconditional and text embeddings into a single batch
322
+ # to avoid doing two forward passes
323
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
324
+
325
+ return text_embeddings
326
+
327
+ def run_safety_checker(self, image, device, dtype):
328
+ if self.safety_checker is not None:
329
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
330
+ image, has_nsfw_concept = self.safety_checker(
331
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
332
+ )
333
+ else:
334
+ has_nsfw_concept = None
335
+ return image, has_nsfw_concept
336
+
337
+ def decode_latents(self, latents):
338
+ latents = 1 / 0.18215 * latents
339
+ image = self.vae.decode(latents).sample
340
+ image = (image / 2 + 0.5).clamp(0, 1)
341
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
342
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
343
+ return image
344
+
345
+ def prepare_extra_step_kwargs(self, generator, eta):
346
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
347
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
348
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
349
+ # and should be between [0, 1]
350
+
351
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
352
+ extra_step_kwargs = {}
353
+ if accepts_eta:
354
+ extra_step_kwargs["eta"] = eta
355
+
356
+ # check if the scheduler accepts generator
357
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
358
+ if accepts_generator:
359
+ extra_step_kwargs["generator"] = generator
360
+ return extra_step_kwargs
361
+
362
+ def check_inputs(self, prompt, height, width, callback_steps):
363
+ if not isinstance(prompt, str) and not isinstance(prompt, list):
364
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
365
+
366
+ if height % 8 != 0 or width % 8 != 0:
367
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
368
+
369
+ if (callback_steps is None) or (
370
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
371
+ ):
372
+ raise ValueError(
373
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
374
+ f" {type(callback_steps)}."
375
+ )
376
+
377
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
378
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
379
+ if latents is None:
380
+ if device.type == "mps":
381
+ # randn does not work reproducibly on mps
382
+ latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
383
+ else:
384
+ latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)
385
+ else:
386
+ if latents.shape != shape:
387
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
388
+ latents = latents.to(device)
389
+
390
+ # scale the initial noise by the standard deviation required by the scheduler
391
+ latents = latents * self.scheduler.init_noise_sigma
392
+ return latents
393
+
394
+ @torch.no_grad()
395
+ def __call__(
396
+ self,
397
+ prompt: Union[str, List[str]],
398
+ height: Optional[int] = None,
399
+ width: Optional[int] = None,
400
+ num_inference_steps: int = 50,
401
+ guidance_scale: float = 7.5,
402
+ negative_prompt: Optional[Union[str, List[str]]] = None,
403
+ num_images_per_prompt: Optional[int] = 1,
404
+ eta: float = 0.0,
405
+ generator: Optional[torch.Generator] = None,
406
+ latents: Optional[torch.FloatTensor] = None,
407
+ output_type: Optional[str] = "pil",
408
+ return_dict: bool = True,
409
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
410
+ callback_steps: int = 1,
411
+ weights: Optional[str] = "",
412
+ ):
413
+ r"""
414
+ Function invoked when calling the pipeline for generation.
415
+
416
+ Args:
417
+ prompt (`str` or `List[str]`):
418
+ The prompt or prompts to guide the image generation.
419
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
420
+ The height in pixels of the generated image.
421
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
422
+ The width in pixels of the generated image.
423
+ num_inference_steps (`int`, *optional*, defaults to 50):
424
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
425
+ expense of slower inference.
426
+ guidance_scale (`float`, *optional*, defaults to 7.5):
427
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
428
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
429
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
430
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
431
+ usually at the expense of lower image quality.
432
+ negative_prompt (`str` or `List[str]`, *optional*):
433
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
434
+ if `guidance_scale` is less than `1`).
435
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
436
+ The number of images to generate per prompt.
437
+ eta (`float`, *optional*, defaults to 0.0):
438
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
439
+ [`schedulers.DDIMScheduler`], will be ignored for others.
440
+ generator (`torch.Generator`, *optional*):
441
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
442
+ deterministic.
443
+ latents (`torch.FloatTensor`, *optional*):
444
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
445
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
446
+ tensor will be generated by sampling using the supplied random `generator`.
447
+ output_type (`str`, *optional*, defaults to `"pil"`):
448
+ The output format of the generate image. Choose between
449
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
450
+ return_dict (`bool`, *optional*, defaults to `True`):
451
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
452
+ plain tuple.
453
+ callback (`Callable`, *optional*):
454
+ A function that will be called every `callback_steps` steps during inference. The function will be
455
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
456
+ callback_steps (`int`, *optional*, defaults to 1):
457
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
458
+ called at every step.
459
+
460
+ Returns:
461
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
462
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
463
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
464
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
465
+ (nsfw) content, according to the `safety_checker`.
466
+ """
467
+ # 0. Default height and width to unet
468
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
469
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
470
+
471
+ # 1. Check inputs. Raise error if not correct
472
+ self.check_inputs(prompt, height, width, callback_steps)
473
+
474
+ # 2. Define call parameters
475
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
476
+ device = self._execution_device
477
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
478
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
479
+ # corresponds to doing no classifier free guidance.
480
+ do_classifier_free_guidance = guidance_scale > 1.0
481
+
482
+ if "|" in prompt:
483
+ prompt = [x.strip() for x in prompt.split("|")]
484
+ print(f"composing {prompt}...")
485
+
486
+ if not weights:
487
+ # specify weights for prompts (excluding the unconditional score)
488
+ print("using equal positive weights (conjunction) for all prompts...")
489
+ weights = torch.tensor([guidance_scale] * len(prompt), device=self.device).reshape(-1, 1, 1, 1)
490
+ else:
491
+ # set prompt weight for each
492
+ num_prompts = len(prompt) if isinstance(prompt, list) else 1
493
+ weights = [float(w.strip()) for w in weights.split("|")]
494
+ # guidance scale as the default
495
+ if len(weights) < num_prompts:
496
+ weights.append(guidance_scale)
497
+ else:
498
+ weights = weights[:num_prompts]
499
+ assert len(weights) == len(prompt), "weights specified are not equal to the number of prompts"
500
+ weights = torch.tensor(weights, device=self.device).reshape(-1, 1, 1, 1)
501
+ else:
502
+ weights = guidance_scale
503
+
504
+ # 3. Encode input prompt
505
+ text_embeddings = self._encode_prompt(
506
+ prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
507
+ )
508
+
509
+ # 4. Prepare timesteps
510
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
511
+ timesteps = self.scheduler.timesteps
512
+
513
+ # 5. Prepare latent variables
514
+ num_channels_latents = self.unet.config.in_channels
515
+ latents = self.prepare_latents(
516
+ batch_size * num_images_per_prompt,
517
+ num_channels_latents,
518
+ height,
519
+ width,
520
+ text_embeddings.dtype,
521
+ device,
522
+ generator,
523
+ latents,
524
+ )
525
+
526
+ # composable diffusion
527
+ if isinstance(prompt, list) and batch_size == 1:
528
+ # remove extra unconditional embedding
529
+ # N = one unconditional embed + conditional embeds
530
+ text_embeddings = text_embeddings[len(prompt) - 1 :]
531
+
532
+ # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
533
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
534
+
535
+ # 7. Denoising loop
536
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
537
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
538
+ for i, t in enumerate(timesteps):
539
+ # expand the latents if we are doing classifier free guidance
540
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
541
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
542
+
543
+ # predict the noise residual
544
+ noise_pred = []
545
+ for j in range(text_embeddings.shape[0]):
546
+ noise_pred.append(
547
+ self.unet(latent_model_input[:1], t, encoder_hidden_states=text_embeddings[j : j + 1]).sample
548
+ )
549
+ noise_pred = torch.cat(noise_pred, dim=0)
550
+
551
+ # perform guidance
552
+ if do_classifier_free_guidance:
553
+ noise_pred_uncond, noise_pred_text = noise_pred[:1], noise_pred[1:]
554
+ noise_pred = noise_pred_uncond + (weights * (noise_pred_text - noise_pred_uncond)).sum(
555
+ dim=0, keepdims=True
556
+ )
557
+
558
+ # compute the previous noisy sample x_t -> x_t-1
559
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
560
+
561
+ # call the callback, if provided
562
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
563
+ progress_bar.update()
564
+ if callback is not None and i % callback_steps == 0:
565
+ callback(i, t, latents)
566
+
567
+ # 8. Post-processing
568
+ image = self.decode_latents(latents)
569
+
570
+ # 9. Run safety checker
571
+ image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype)
572
+
573
+ # 10. Convert to PIL
574
+ if output_type == "pil":
575
+ image = self.numpy_to_pil(image)
576
+
577
+ if not return_dict:
578
+ return (image, has_nsfw_concept)
579
+
580
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
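In the denoising loop above, each "|"-separated sub-prompt gets its own noise prediction, and the composed estimate is the unconditional prediction plus a weighted sum of the per-prompt differences. A minimal sketch of that combination on dummy tensors (the shapes and weights below are assumptions for illustration, not values produced by a real model):

```python
import torch

noise_pred_uncond = torch.zeros(1, 4, 64, 64)   # dummy unconditional prediction
noise_pred_text = torch.ones(2, 4, 64, 64)      # dummy predictions for two sub-prompts, e.g. "a cat | a hat"
weights = torch.tensor([7.5, 7.5]).reshape(-1, 1, 1, 1)  # one guidance weight per sub-prompt

# Same combination as in the loop: uncond + sum_j w_j * (cond_j - uncond)
composed = noise_pred_uncond + (weights * (noise_pred_text - noise_pred_uncond)).sum(dim=0, keepdim=True)
print(composed.shape, composed.mean())  # torch.Size([1, 4, 64, 64]) tensor(15.)
```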
v0.19.2/ddim_noise_comparative_analysis.py ADDED
@@ -0,0 +1,190 @@
1
+ # Copyright 2022 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ from typing import List, Optional, Tuple, Union
16
+
17
+ import PIL
18
+ import torch
19
+ from torchvision import transforms
20
+
21
+ from diffusers.pipeline_utils import DiffusionPipeline, ImagePipelineOutput
22
+ from diffusers.schedulers import DDIMScheduler
23
+ from diffusers.utils import randn_tensor
24
+
25
+
26
+ trans = transforms.Compose(
27
+ [
28
+ transforms.Resize((256, 256)),
29
+ transforms.ToTensor(),
30
+ transforms.Normalize([0.5], [0.5]),
31
+ ]
32
+ )
33
+
34
+
35
+ def preprocess(image):
36
+ if isinstance(image, torch.Tensor):
37
+ return image
38
+ elif isinstance(image, PIL.Image.Image):
39
+ image = [image]
40
+
41
+ image = [trans(img.convert("RGB")) for img in image]
42
+ image = torch.stack(image)
43
+ return image
44
+
45
+
46
+ class DDIMNoiseComparativeAnalysisPipeline(DiffusionPipeline):
47
+ r"""
48
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
49
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
50
+
51
+ Parameters:
52
+ unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.
53
+ scheduler ([`SchedulerMixin`]):
54
+ A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
55
+ [`DDPMScheduler`], or [`DDIMScheduler`].
56
+ """
57
+
58
+ def __init__(self, unet, scheduler):
59
+ super().__init__()
60
+
61
+ # make sure scheduler can always be converted to DDIM
62
+ scheduler = DDIMScheduler.from_config(scheduler.config)
63
+
64
+ self.register_modules(unet=unet, scheduler=scheduler)
65
+
66
+ def check_inputs(self, strength):
67
+ if strength < 0 or strength > 1:
68
+ raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
69
+
70
+ def get_timesteps(self, num_inference_steps, strength, device):
71
+ # get the original timestep using init_timestep
72
+ init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
73
+
74
+ t_start = max(num_inference_steps - init_timestep, 0)
75
+ timesteps = self.scheduler.timesteps[t_start:]
76
+
77
+ return timesteps, num_inference_steps - t_start
78
+
79
+ def prepare_latents(self, image, timestep, batch_size, dtype, device, generator=None):
80
+ if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
81
+ raise ValueError(
82
+ f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
83
+ )
84
+
85
+ init_latents = image.to(device=device, dtype=dtype)
86
+
87
+ if isinstance(generator, list) and len(generator) != batch_size:
88
+ raise ValueError(
89
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
90
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
91
+ )
92
+
93
+ shape = init_latents.shape
94
+ noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
95
+
96
+ # get latents
97
+ print("add noise to latents at timestep", timestep)
98
+ init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
99
+ latents = init_latents
100
+
101
+ return latents
102
+
103
+ @torch.no_grad()
104
+ def __call__(
105
+ self,
106
+ image: Union[torch.FloatTensor, PIL.Image.Image] = None,
107
+ strength: float = 0.8,
108
+ batch_size: int = 1,
109
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
110
+ eta: float = 0.0,
111
+ num_inference_steps: int = 50,
112
+ use_clipped_model_output: Optional[bool] = None,
113
+ output_type: Optional[str] = "pil",
114
+ return_dict: bool = True,
115
+ ) -> Union[ImagePipelineOutput, Tuple]:
116
+ r"""
117
+ Args:
118
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
119
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
120
+ process.
121
+ strength (`float`, *optional*, defaults to 0.8):
122
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
123
+ will be used as a starting point, adding more noise to it the larger the `strength`. The number of
124
+ denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
125
+ be maximum and the denoising process will run for the full number of iterations specified in
126
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
127
+ batch_size (`int`, *optional*, defaults to 1):
128
+ The number of images to generate.
129
+ generator (`torch.Generator`, *optional*):
130
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
131
+ to make generation deterministic.
132
+ eta (`float`, *optional*, defaults to 0.0):
133
+ The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM).
134
+ num_inference_steps (`int`, *optional*, defaults to 50):
135
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
136
+ expense of slower inference.
137
+ use_clipped_model_output (`bool`, *optional*, defaults to `None`):
138
+ if `True` or `False`, see documentation for `DDIMScheduler.step`. If `None`, nothing is passed
139
+ downstream to the scheduler. So use `None` for schedulers which don't support this argument.
140
+ output_type (`str`, *optional*, defaults to `"pil"`):
141
+ The output format of the generate image. Choose between
142
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
143
+ return_dict (`bool`, *optional*, defaults to `True`):
144
+ Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
145
+
146
+ Returns:
147
+ [`~pipelines.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if `return_dict` is
148
+ True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.
149
+ """
150
+ # 1. Check inputs. Raise error if not correct
151
+ self.check_inputs(strength)
152
+
153
+ # 2. Preprocess image
154
+ image = preprocess(image)
155
+
156
+ # 3. set timesteps
157
+ self.scheduler.set_timesteps(num_inference_steps, device=self.device)
158
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, self.device)
159
+ latent_timestep = timesteps[:1].repeat(batch_size)
160
+
161
+ # 4. Prepare latent variables
162
+ latents = self.prepare_latents(image, latent_timestep, batch_size, self.unet.dtype, self.device, generator)
163
+ image = latents
164
+
165
+ # 5. Denoising loop
166
+ for t in self.progress_bar(timesteps):
167
+ # 1. predict noise model_output
168
+ model_output = self.unet(image, t).sample
169
+
170
+ # 2. predict previous mean of image x_t-1 and add variance depending on eta
171
+ # eta corresponds to η in paper and should be between [0, 1]
172
+ # do x_t -> x_t-1
173
+ image = self.scheduler.step(
174
+ model_output,
175
+ t,
176
+ image,
177
+ eta=eta,
178
+ use_clipped_model_output=use_clipped_model_output,
179
+ generator=generator,
180
+ ).prev_sample
181
+
182
+ image = (image / 2 + 0.5).clamp(0, 1)
183
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
184
+ if output_type == "pil":
185
+ image = self.numpy_to_pil(image)
186
+
187
+ if not return_dict:
188
+ return (image, latent_timestep.item())
189
+
190
+ return ImagePipelineOutput(images=image)
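How much noise this pipeline adds to the input image is governed entirely by `strength` through `get_timesteps`: a fraction `strength` of the schedule is kept and the image is noised to the first of the remaining timesteps. A small worked example of that bookkeeping, assuming the defaults of 50 inference steps and `strength=0.8`:

```python
# Same arithmetic as DDIMNoiseComparativeAnalysisPipeline.get_timesteps
num_inference_steps = 50
strength = 0.8

init_timestep = min(int(num_inference_steps * strength), num_inference_steps)  # 40
t_start = max(num_inference_steps - init_timestep, 0)                          # 10

# The pipeline keeps scheduler.timesteps[t_start:], i.e. the last 40 of the 50 steps,
# and adds noise to the input image at the first of those timesteps.
print(init_timestep, t_start)  # 40 10
```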
v0.19.2/edict_pipeline.py ADDED
@@ -0,0 +1,264 @@
1
+ from typing import Optional
2
+
3
+ import torch
4
+ from PIL import Image
5
+ from tqdm.auto import tqdm
6
+ from transformers import CLIPTextModel, CLIPTokenizer
7
+
8
+ from diffusers import AutoencoderKL, DDIMScheduler, DiffusionPipeline, UNet2DConditionModel
9
+ from diffusers.image_processor import VaeImageProcessor
10
+ from diffusers.utils import (
11
+ deprecate,
12
+ )
13
+
14
+
15
+ class EDICTPipeline(DiffusionPipeline):
16
+ def __init__(
17
+ self,
18
+ vae: AutoencoderKL,
19
+ text_encoder: CLIPTextModel,
20
+ tokenizer: CLIPTokenizer,
21
+ unet: UNet2DConditionModel,
22
+ scheduler: DDIMScheduler,
23
+ mixing_coeff: float = 0.93,
24
+ leapfrog_steps: bool = True,
25
+ ):
26
+ self.mixing_coeff = mixing_coeff
27
+ self.leapfrog_steps = leapfrog_steps
28
+
29
+ super().__init__()
30
+ self.register_modules(
31
+ vae=vae,
32
+ text_encoder=text_encoder,
33
+ tokenizer=tokenizer,
34
+ unet=unet,
35
+ scheduler=scheduler,
36
+ )
37
+
38
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
39
+ self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
40
+
41
+ def _encode_prompt(
42
+ self, prompt: str, negative_prompt: Optional[str] = None, do_classifier_free_guidance: bool = False
43
+ ):
44
+ text_inputs = self.tokenizer(
45
+ prompt,
46
+ padding="max_length",
47
+ max_length=self.tokenizer.model_max_length,
48
+ truncation=True,
49
+ return_tensors="pt",
50
+ )
51
+
52
+ prompt_embeds = self.text_encoder(text_inputs.input_ids.to(self.device)).last_hidden_state
53
+
54
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=self.device)
55
+
56
+ if do_classifier_free_guidance:
57
+ uncond_tokens = "" if negative_prompt is None else negative_prompt
58
+
59
+ uncond_input = self.tokenizer(
60
+ uncond_tokens,
61
+ padding="max_length",
62
+ max_length=self.tokenizer.model_max_length,
63
+ truncation=True,
64
+ return_tensors="pt",
65
+ )
66
+
67
+ negative_prompt_embeds = self.text_encoder(uncond_input.input_ids.to(self.device)).last_hidden_state
68
+
69
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
70
+
71
+ return prompt_embeds
72
+
73
+ def denoise_mixing_layer(self, x: torch.Tensor, y: torch.Tensor):
74
+ x = self.mixing_coeff * x + (1 - self.mixing_coeff) * y
75
+ y = self.mixing_coeff * y + (1 - self.mixing_coeff) * x
76
+
77
+ return [x, y]
78
+
79
+ def noise_mixing_layer(self, x: torch.Tensor, y: torch.Tensor):
80
+ y = (y - (1 - self.mixing_coeff) * x) / self.mixing_coeff
81
+ x = (x - (1 - self.mixing_coeff) * y) / self.mixing_coeff
82
+
83
+ return [x, y]
84
+
85
+ def _get_alpha_and_beta(self, t: torch.Tensor):
86
+ # convert to int because self.scheduler.alphas_cumprod is always kept on the CPU
87
+ t = int(t)
88
+
89
+ alpha_prod = self.scheduler.alphas_cumprod[t] if t >= 0 else self.scheduler.final_alpha_cumprod
90
+
91
+ return alpha_prod, 1 - alpha_prod
92
+
93
+ def noise_step(
94
+ self,
95
+ base: torch.Tensor,
96
+ model_input: torch.Tensor,
97
+ model_output: torch.Tensor,
98
+ timestep: torch.Tensor,
99
+ ):
100
+ prev_timestep = timestep - self.scheduler.config.num_train_timesteps / self.scheduler.num_inference_steps
101
+
102
+ alpha_prod_t, beta_prod_t = self._get_alpha_and_beta(timestep)
103
+ alpha_prod_t_prev, beta_prod_t_prev = self._get_alpha_and_beta(prev_timestep)
104
+
105
+ a_t = (alpha_prod_t_prev / alpha_prod_t) ** 0.5
106
+ b_t = -a_t * (beta_prod_t**0.5) + beta_prod_t_prev**0.5
107
+
108
+ next_model_input = (base - b_t * model_output) / a_t
109
+
110
+ return model_input, next_model_input.to(base.dtype)
111
+
112
+ def denoise_step(
113
+ self,
114
+ base: torch.Tensor,
115
+ model_input: torch.Tensor,
116
+ model_output: torch.Tensor,
117
+ timestep: torch.Tensor,
118
+ ):
119
+ prev_timestep = timestep - self.scheduler.config.num_train_timesteps / self.scheduler.num_inference_steps
120
+
121
+ alpha_prod_t, beta_prod_t = self._get_alpha_and_beta(timestep)
122
+ alpha_prod_t_prev, beta_prod_t_prev = self._get_alpha_and_beta(prev_timestep)
123
+
124
+ a_t = (alpha_prod_t_prev / alpha_prod_t) ** 0.5
125
+ b_t = -a_t * (beta_prod_t**0.5) + beta_prod_t_prev**0.5
126
+ next_model_input = a_t * base + b_t * model_output
127
+
128
+ return model_input, next_model_input.to(base.dtype)
129
+
130
+ @torch.no_grad()
131
+ def decode_latents(self, latents: torch.Tensor):
132
+ latents = 1 / self.vae.config.scaling_factor * latents
133
+ image = self.vae.decode(latents).sample
134
+ image = (image / 2 + 0.5).clamp(0, 1)
135
+ return image
136
+
137
+ @torch.no_grad()
138
+ def prepare_latents(
139
+ self,
140
+ image: Image.Image,
141
+ text_embeds: torch.Tensor,
142
+ timesteps: torch.Tensor,
143
+ guidance_scale: float,
144
+ generator: Optional[torch.Generator] = None,
145
+ ):
146
+ do_classifier_free_guidance = guidance_scale > 1.0
147
+
148
+ image = image.to(device=self.device, dtype=text_embeds.dtype)
149
+ latent = self.vae.encode(image).latent_dist.sample(generator)
150
+
151
+ latent = self.vae.config.scaling_factor * latent
152
+
153
+ coupled_latents = [latent.clone(), latent.clone()]
154
+
155
+ for i, t in tqdm(enumerate(timesteps), total=len(timesteps)):
156
+ coupled_latents = self.noise_mixing_layer(x=coupled_latents[0], y=coupled_latents[1])
157
+
158
+ # j - model_input index, k - base index
159
+ for j in range(2):
160
+ k = j ^ 1
161
+
162
+ if self.leapfrog_steps:
163
+ if i % 2 == 0:
164
+ k, j = j, k
165
+
166
+ model_input = coupled_latents[j]
167
+ base = coupled_latents[k]
168
+
169
+ latent_model_input = torch.cat([model_input] * 2) if do_classifier_free_guidance else model_input
170
+
171
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeds).sample
172
+
173
+ if do_classifier_free_guidance:
174
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
175
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
176
+
177
+ base, model_input = self.noise_step(
178
+ base=base,
179
+ model_input=model_input,
180
+ model_output=noise_pred,
181
+ timestep=t,
182
+ )
183
+
184
+ coupled_latents[k] = model_input
185
+
186
+ return coupled_latents
187
+
188
+ @torch.no_grad()
189
+ def __call__(
190
+ self,
191
+ base_prompt: str,
192
+ target_prompt: str,
193
+ image: Image.Image,
194
+ guidance_scale: float = 3.0,
195
+ num_inference_steps: int = 50,
196
+ strength: float = 0.8,
197
+ negative_prompt: Optional[str] = None,
198
+ generator: Optional[torch.Generator] = None,
199
+ output_type: Optional[str] = "pil",
200
+ ):
201
+ do_classifier_free_guidance = guidance_scale > 1.0
202
+
203
+ image = self.image_processor.preprocess(image)
204
+
205
+ base_embeds = self._encode_prompt(base_prompt, negative_prompt, do_classifier_free_guidance)
206
+ target_embeds = self._encode_prompt(target_prompt, negative_prompt, do_classifier_free_guidance)
207
+
208
+ self.scheduler.set_timesteps(num_inference_steps, self.device)
209
+
210
+ t_limit = num_inference_steps - int(num_inference_steps * strength)
211
+ fwd_timesteps = self.scheduler.timesteps[t_limit:]
212
+ bwd_timesteps = fwd_timesteps.flip(0)
213
+
214
+ coupled_latents = self.prepare_latents(image, base_embeds, bwd_timesteps, guidance_scale, generator)
215
+
216
+ for i, t in tqdm(enumerate(fwd_timesteps), total=len(fwd_timesteps)):
217
+ # j - model_input index, k - base index
218
+ for k in range(2):
219
+ j = k ^ 1
220
+
221
+ if self.leapfrog_steps:
222
+ if i % 2 == 1:
223
+ k, j = j, k
224
+
225
+ model_input = coupled_latents[j]
226
+ base = coupled_latents[k]
227
+
228
+ latent_model_input = torch.cat([model_input] * 2) if do_classifier_free_guidance else model_input
229
+
230
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=target_embeds).sample
231
+
232
+ if do_classifier_free_guidance:
233
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
234
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
235
+
236
+ base, model_input = self.denoise_step(
237
+ base=base,
238
+ model_input=model_input,
239
+ model_output=noise_pred,
240
+ timestep=t,
241
+ )
242
+
243
+ coupled_latents[k] = model_input
244
+
245
+ coupled_latents = self.denoise_mixing_layer(x=coupled_latents[0], y=coupled_latents[1])
246
+
247
+ # either one is fine
248
+ final_latent = coupled_latents[0]
249
+
250
+ if output_type not in ["latent", "pt", "np", "pil"]:
251
+ deprecation_message = (
252
+ f"the output_type {output_type} is outdated. Please make sure to set it to one of these instead: "
253
+ "`pil`, `np`, `pt`, `latent`"
254
+ )
255
+ deprecate("Unsupported output_type", "1.0.0", deprecation_message, standard_warn=False)
256
+ output_type = "np"
257
+
258
+ if output_type == "latent":
259
+ image = final_latent
260
+ else:
261
+ image = self.decode_latents(final_latent)
262
+ image = self.image_processor.postprocess(image, output_type=output_type)
263
+
264
+ return image
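EDICT keeps two coupled latents, and `denoise_mixing_layer` and `noise_mixing_layer` are exact algebraic inverses of one another, which is what makes the inversion step reversible. A quick standalone check of that property, using the default `mixing_coeff` and random tensors:

```python
import torch

p = 0.93  # default mixing_coeff

def denoise_mix(x, y):
    # same update as EDICTPipeline.denoise_mixing_layer
    x = p * x + (1 - p) * y
    y = p * y + (1 - p) * x
    return x, y

def noise_mix(x, y):
    # same update as EDICTPipeline.noise_mixing_layer
    y = (y - (1 - p) * x) / p
    x = (x - (1 - p) * y) / p
    return x, y

x0, y0 = torch.randn(4), torch.randn(4)
x1, y1 = denoise_mix(x0, y0)
x2, y2 = noise_mix(x1, y1)
print(torch.allclose(x2, x0, atol=1e-6), torch.allclose(y2, y0, atol=1e-6))  # True True
```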
v0.19.2/iadb.py ADDED
@@ -0,0 +1,149 @@
1
+ from typing import List, Optional, Tuple, Union
2
+
3
+ import torch
4
+
5
+ from diffusers import DiffusionPipeline
6
+ from diffusers.configuration_utils import ConfigMixin
7
+ from diffusers.pipeline_utils import ImagePipelineOutput
8
+ from diffusers.schedulers.scheduling_utils import SchedulerMixin
9
+
10
+
11
+ class IADBScheduler(SchedulerMixin, ConfigMixin):
12
+ """
13
+ IADBScheduler is a scheduler for the Iterative α-(de)Blending denoising method. It is simple and minimalist.
14
+
15
+ For more details, see the original paper: https://arxiv.org/abs/2305.03486 and the blog post: https://ggx-research.github.io/publication/2023/05/10/publication-iadb.html
16
+ """
17
+
18
+ def step(
19
+ self,
20
+ model_output: torch.FloatTensor,
21
+ timestep: int,
22
+ x_alpha: torch.FloatTensor,
23
+ ) -> torch.FloatTensor:
24
+ """
25
+ Predict the sample at the previous timestep by reversing the ODE. Core function to propagate the diffusion
26
+ process from the learned model outputs (most often the predicted noise).
27
+
28
+ Args:
29
+ model_output (`torch.FloatTensor`): direct output from learned diffusion model. It is the direction from x0 to x1.
30
+ timestep (`float`): current timestep in the diffusion chain.
31
+ x_alpha (`torch.FloatTensor`): x_alpha sample for the current timestep
32
+
33
+ Returns:
34
+ `torch.FloatTensor`: the sample at the previous timestep
35
+
36
+ """
37
+ if self.num_inference_steps is None:
38
+ raise ValueError(
39
+ "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
40
+ )
41
+
42
+ alpha = timestep / self.num_inference_steps
43
+ alpha_next = (timestep + 1) / self.num_inference_steps
44
+
45
+ d = model_output
46
+
47
+ x_alpha = x_alpha + (alpha_next - alpha) * d
48
+
49
+ return x_alpha
50
+
51
+ def set_timesteps(self, num_inference_steps: int):
52
+ self.num_inference_steps = num_inference_steps
53
+
54
+ def add_noise(
55
+ self,
56
+ original_samples: torch.FloatTensor,
57
+ noise: torch.FloatTensor,
58
+ alpha: torch.FloatTensor,
59
+ ) -> torch.FloatTensor:
60
+ return original_samples * alpha + noise * (1 - alpha)
61
+
62
+ def __len__(self):
63
+ return self.config.num_train_timesteps
64
+
65
+
66
+ class IADBPipeline(DiffusionPipeline):
67
+ r"""
68
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
69
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
70
+
71
+ Parameters:
72
+ unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.
73
+ scheduler ([`SchedulerMixin`]):
74
+ A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
75
+ [`DDPMScheduler`], or [`DDIMScheduler`].
76
+ """
77
+
78
+ def __init__(self, unet, scheduler):
79
+ super().__init__()
80
+
81
+ self.register_modules(unet=unet, scheduler=scheduler)
82
+
83
+ @torch.no_grad()
84
+ def __call__(
85
+ self,
86
+ batch_size: int = 1,
87
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
88
+ num_inference_steps: int = 50,
89
+ output_type: Optional[str] = "pil",
90
+ return_dict: bool = True,
91
+ ) -> Union[ImagePipelineOutput, Tuple]:
92
+ r"""
93
+ Args:
94
+ batch_size (`int`, *optional*, defaults to 1):
95
+ The number of images to generate.
96
+ num_inference_steps (`int`, *optional*, defaults to 50):
97
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
98
+ expense of slower inference.
99
+ output_type (`str`, *optional*, defaults to `"pil"`):
100
+ The output format of the generate image. Choose between
101
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
102
+ return_dict (`bool`, *optional*, defaults to `True`):
103
+ Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
104
+
105
+ Returns:
106
+ [`~pipelines.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if `return_dict` is
107
+ True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.
108
+ """
109
+
110
+ # Sample gaussian noise to begin loop
111
+ if isinstance(self.unet.config.sample_size, int):
112
+ image_shape = (
113
+ batch_size,
114
+ self.unet.config.in_channels,
115
+ self.unet.config.sample_size,
116
+ self.unet.config.sample_size,
117
+ )
118
+ else:
119
+ image_shape = (batch_size, self.unet.config.in_channels, *self.unet.config.sample_size)
120
+
121
+ if isinstance(generator, list) and len(generator) != batch_size:
122
+ raise ValueError(
123
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
124
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
125
+ )
126
+
127
+ image = torch.randn(image_shape, generator=generator, device=self.device, dtype=self.unet.dtype)
128
+
129
+ # set step values
130
+ self.scheduler.set_timesteps(num_inference_steps)
131
+ x_alpha = image.clone()
132
+ for t in self.progress_bar(range(num_inference_steps)):
133
+ alpha = t / num_inference_steps
134
+
135
+ # 1. predict noise model_output
136
+ model_output = self.unet(x_alpha, torch.tensor(alpha, device=x_alpha.device)).sample
137
+
138
+ # 2. step
139
+ x_alpha = self.scheduler.step(model_output, t, x_alpha)
140
+
141
+ image = (x_alpha * 0.5 + 0.5).clamp(0, 1)
142
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
143
+ if output_type == "pil":
144
+ image = self.numpy_to_pil(image)
145
+
146
+ if not return_dict:
147
+ return (image,)
148
+
149
+ return ImagePipelineOutput(images=image)
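The two rules that define IADB above are a linear α-blend between data and noise (`add_noise`) and a fixed integration step of 1/`num_inference_steps` along the predicted direction (`step`). A small numeric sketch of both, on scalar tensors with an idealized direction (all values below are assumptions for illustration):

```python
import torch

clean = torch.tensor(1.0)    # stand-in for a data sample (x1 in the paper)
noise = torch.tensor(-1.0)   # stand-in for pure noise (x0 in the paper)

def add_noise(original, noise, alpha):
    # identical to IADBScheduler.add_noise: linear alpha-blending
    return original * alpha + noise * (1 - alpha)

print(add_noise(clean, noise, torch.tensor(0.0)))  # tensor(-1.) -> pure noise at alpha=0
print(add_noise(clean, noise, torch.tensor(1.0)))  # tensor(1.)  -> clean sample at alpha=1

# One deblending step with 50 inference steps moves x_alpha by (1/50) * d,
# where d is the predicted direction from noise to data (here the ideal one).
num_inference_steps, t = 50, 0
d = clean - noise
x_alpha = add_noise(clean, noise, torch.tensor(t / num_inference_steps))
x_alpha = x_alpha + (1 / num_inference_steps) * d
print(x_alpha)  # tensor(-0.9600)
```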
v0.19.2/imagic_stable_diffusion.py ADDED
@@ -0,0 +1,496 @@
1
+ """
2
+ modeled after the textual_inversion.py / train_dreambooth.py and the work
3
+ of justinpinkney here: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
4
+ """
5
+ import inspect
6
+ import warnings
7
+ from typing import List, Optional, Union
8
+
9
+ import numpy as np
10
+ import PIL
11
+ import torch
12
+ import torch.nn.functional as F
13
+ from accelerate import Accelerator
14
+
15
+ # TODO: remove and import from diffusers.utils when the new version of diffusers is released
16
+ from packaging import version
17
+ from tqdm.auto import tqdm
18
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
19
+
20
+ from diffusers import DiffusionPipeline
21
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
22
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
23
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
24
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
25
+ from diffusers.utils import logging
26
+
27
+
28
+ if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
29
+ PIL_INTERPOLATION = {
30
+ "linear": PIL.Image.Resampling.BILINEAR,
31
+ "bilinear": PIL.Image.Resampling.BILINEAR,
32
+ "bicubic": PIL.Image.Resampling.BICUBIC,
33
+ "lanczos": PIL.Image.Resampling.LANCZOS,
34
+ "nearest": PIL.Image.Resampling.NEAREST,
35
+ }
36
+ else:
37
+ PIL_INTERPOLATION = {
38
+ "linear": PIL.Image.LINEAR,
39
+ "bilinear": PIL.Image.BILINEAR,
40
+ "bicubic": PIL.Image.BICUBIC,
41
+ "lanczos": PIL.Image.LANCZOS,
42
+ "nearest": PIL.Image.NEAREST,
43
+ }
44
+ # ------------------------------------------------------------------------------
45
+
46
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
47
+
48
+
49
+ def preprocess(image):
50
+ w, h = image.size
51
+ w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
52
+ image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
53
+ image = np.array(image).astype(np.float32) / 255.0
54
+ image = image[None].transpose(0, 3, 1, 2)
55
+ image = torch.from_numpy(image)
56
+ return 2.0 * image - 1.0
57
+
58
+
59
+ class ImagicStableDiffusionPipeline(DiffusionPipeline):
60
+ r"""
61
+ Pipeline for imagic image editing.
62
+ See paper here: https://arxiv.org/pdf/2210.09276.pdf
63
+
64
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
65
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
66
+ Args:
67
+ vae ([`AutoencoderKL`]):
68
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
69
+ text_encoder ([`CLIPTextModel`]):
70
+ Frozen text-encoder. Stable Diffusion uses the text portion of
71
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
72
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
73
+ tokenizer (`CLIPTokenizer`):
74
+ Tokenizer of class
75
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
76
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
77
+ scheduler ([`SchedulerMixin`]):
78
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
79
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
80
+ safety_checker ([`StableDiffusionSafetyChecker`]):
81
+ Classification module that estimates whether generated images could be considered offensive or harmful.
82
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
83
+ feature_extractor ([`CLIPImageProcessor`]):
84
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
85
+ """
86
+
87
+ def __init__(
88
+ self,
89
+ vae: AutoencoderKL,
90
+ text_encoder: CLIPTextModel,
91
+ tokenizer: CLIPTokenizer,
92
+ unet: UNet2DConditionModel,
93
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
94
+ safety_checker: StableDiffusionSafetyChecker,
95
+ feature_extractor: CLIPImageProcessor,
96
+ ):
97
+ super().__init__()
98
+ self.register_modules(
99
+ vae=vae,
100
+ text_encoder=text_encoder,
101
+ tokenizer=tokenizer,
102
+ unet=unet,
103
+ scheduler=scheduler,
104
+ safety_checker=safety_checker,
105
+ feature_extractor=feature_extractor,
106
+ )
107
+
108
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
109
+ r"""
110
+ Enable sliced attention computation.
111
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
112
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
113
+ Args:
114
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
115
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
116
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
117
+ `attention_head_dim` must be a multiple of `slice_size`.
118
+ """
119
+ if slice_size == "auto":
120
+ # half the attention head size is usually a good trade-off between
121
+ # speed and memory
122
+ slice_size = self.unet.config.attention_head_dim // 2
123
+ self.unet.set_attention_slice(slice_size)
124
+
125
+ def disable_attention_slicing(self):
126
+ r"""
127
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
128
+ back to computing attention in one step.
129
+ """
130
+ # set slice_size = `None` to disable `attention slicing`
131
+ self.enable_attention_slicing(None)
132
+
133
+ def train(
134
+ self,
135
+ prompt: Union[str, List[str]],
136
+ image: Union[torch.FloatTensor, PIL.Image.Image],
137
+ height: Optional[int] = 512,
138
+ width: Optional[int] = 512,
139
+ generator: Optional[torch.Generator] = None,
140
+ embedding_learning_rate: float = 0.001,
141
+ diffusion_model_learning_rate: float = 2e-6,
142
+ text_embedding_optimization_steps: int = 500,
143
+ model_fine_tuning_optimization_steps: int = 1000,
144
+ **kwargs,
145
+ ):
146
+ r"""
147
+ Function invoked when calling the pipeline for generation.
148
+ Args:
149
+ prompt (`str` or `List[str]`):
150
+ The prompt or prompts to guide the image generation.
151
+ height (`int`, *optional*, defaults to 512):
152
+ The height in pixels of the generated image.
153
+ width (`int`, *optional*, defaults to 512):
154
+ The width in pixels of the generated image.
155
+ num_inference_steps (`int`, *optional*, defaults to 50):
156
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
157
+ expense of slower inference.
158
+ guidance_scale (`float`, *optional*, defaults to 7.5):
159
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
160
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
161
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
162
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
163
+ usually at the expense of lower image quality.
164
+ eta (`float`, *optional*, defaults to 0.0):
165
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
166
+ [`schedulers.DDIMScheduler`], will be ignored for others.
167
+ generator (`torch.Generator`, *optional*):
168
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
169
+ deterministic.
170
+ latents (`torch.FloatTensor`, *optional*):
171
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
172
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
173
+ tensor will be generated by sampling using the supplied random `generator`.
174
+ output_type (`str`, *optional*, defaults to `"pil"`):
175
+ The output format of the generated image. Choose between
176
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
177
+ return_dict (`bool`, *optional*, defaults to `True`):
178
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
179
+ plain tuple.
180
+ Returns:
181
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
182
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
183
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
184
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
185
+ (nsfw) content, according to the `safety_checker`.
186
+ """
187
+ accelerator = Accelerator(
188
+ gradient_accumulation_steps=1,
189
+ mixed_precision="fp16",
190
+ )
191
+
192
+ device = kwargs.pop("torch_device", None)  # default to None so the device fallback below still works
193
+ if device is not None:
194
+ warnings.warn(
195
+ "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
196
+ " Consider using `pipe.to(torch_device)` instead."
197
+ )
198
+
199
+ if device is None:
200
+ device = "cuda" if torch.cuda.is_available() else "cpu"
201
+ self.to(device)
202
+
203
+ if height % 8 != 0 or width % 8 != 0:
204
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
205
+
206
+ # Freeze vae and unet
207
+ self.vae.requires_grad_(False)
208
+ self.unet.requires_grad_(False)
209
+ self.text_encoder.requires_grad_(False)
210
+ self.unet.eval()
211
+ self.vae.eval()
212
+ self.text_encoder.eval()
213
+
214
+ if accelerator.is_main_process:
215
+ accelerator.init_trackers(
216
+ "imagic",
217
+ config={
218
+ "embedding_learning_rate": embedding_learning_rate,
219
+ "text_embedding_optimization_steps": text_embedding_optimization_steps,
220
+ },
221
+ )
222
+
223
+ # get text embeddings for prompt
224
+ text_input = self.tokenizer(
225
+ prompt,
226
+ padding="max_length",
227
+ max_length=self.tokenizer.model_max_length,
228
+ truncation=True,
229
+ return_tensors="pt",
230
+ )
231
+ text_embeddings = torch.nn.Parameter(
232
+ self.text_encoder(text_input.input_ids.to(self.device))[0], requires_grad=True
233
+ )
234
+ text_embeddings = text_embeddings.detach()
235
+ text_embeddings.requires_grad_()
236
+ text_embeddings_orig = text_embeddings.clone()
237
+
238
+ # Initialize the optimizer
239
+ optimizer = torch.optim.Adam(
240
+ [text_embeddings], # only optimize the embeddings
241
+ lr=embedding_learning_rate,
242
+ )
243
+
244
+ if isinstance(image, PIL.Image.Image):
245
+ image = preprocess(image)
246
+
247
+ latents_dtype = text_embeddings.dtype
248
+ image = image.to(device=self.device, dtype=latents_dtype)
249
+ init_latent_image_dist = self.vae.encode(image).latent_dist
250
+ image_latents = init_latent_image_dist.sample(generator=generator)
251
+ image_latents = 0.18215 * image_latents
252
+
253
+ progress_bar = tqdm(range(text_embedding_optimization_steps), disable=not accelerator.is_local_main_process)
254
+ progress_bar.set_description("Steps")
255
+
256
+ global_step = 0
257
+
258
+ logger.info("First optimizing the text embedding to better reconstruct the init image")
259
+ for _ in range(text_embedding_optimization_steps):
260
+ with accelerator.accumulate(text_embeddings):
261
+ # Sample noise that we'll add to the latents
262
+ noise = torch.randn(image_latents.shape).to(image_latents.device)
263
+ timesteps = torch.randint(1000, (1,), device=image_latents.device)
264
+
265
+ # Add noise to the latents according to the noise magnitude at each timestep
266
+ # (this is the forward diffusion process)
267
+ noisy_latents = self.scheduler.add_noise(image_latents, noise, timesteps)
268
+
269
+ # Predict the noise residual
270
+ noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
271
+
272
+ loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
273
+ accelerator.backward(loss)
274
+
275
+ optimizer.step()
276
+ optimizer.zero_grad()
277
+
278
+ # Checks if the accelerator has performed an optimization step behind the scenes
279
+ if accelerator.sync_gradients:
280
+ progress_bar.update(1)
281
+ global_step += 1
282
+
283
+ logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
284
+ progress_bar.set_postfix(**logs)
285
+ accelerator.log(logs, step=global_step)
286
+
287
+ accelerator.wait_for_everyone()
288
+
289
+ text_embeddings.requires_grad_(False)
290
+
291
+ # Now we fine tune the unet to better reconstruct the image
292
+ self.unet.requires_grad_(True)
293
+ self.unet.train()
294
+ optimizer = torch.optim.Adam(
295
+ self.unet.parameters(), # only optimize unet
296
+ lr=diffusion_model_learning_rate,
297
+ )
298
+ progress_bar = tqdm(range(model_fine_tuning_optimization_steps), disable=not accelerator.is_local_main_process)
299
+
300
+ logger.info("Next fine tuning the entire model to better reconstruct the init image")
301
+ for _ in range(model_fine_tuning_optimization_steps):
302
+ with accelerator.accumulate(self.unet.parameters()):
303
+ # Sample noise that we'll add to the latents
304
+ noise = torch.randn(image_latents.shape).to(image_latents.device)
305
+ timesteps = torch.randint(1000, (1,), device=image_latents.device)
306
+
307
+ # Add noise to the latents according to the noise magnitude at each timestep
308
+ # (this is the forward diffusion process)
309
+ noisy_latents = self.scheduler.add_noise(image_latents, noise, timesteps)
310
+
311
+ # Predict the noise residual
312
+ noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
313
+
314
+ loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
315
+ accelerator.backward(loss)
316
+
317
+ optimizer.step()
318
+ optimizer.zero_grad()
319
+
320
+ # Checks if the accelerator has performed an optimization step behind the scenes
321
+ if accelerator.sync_gradients:
322
+ progress_bar.update(1)
323
+ global_step += 1
324
+
325
+ logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
326
+ progress_bar.set_postfix(**logs)
327
+ accelerator.log(logs, step=global_step)
328
+
329
+ accelerator.wait_for_everyone()
330
+ self.text_embeddings_orig = text_embeddings_orig
331
+ self.text_embeddings = text_embeddings
332
+
333
+ @torch.no_grad()
334
+ def __call__(
335
+ self,
336
+ alpha: float = 1.2,
337
+ height: Optional[int] = 512,
338
+ width: Optional[int] = 512,
339
+ num_inference_steps: Optional[int] = 50,
340
+ generator: Optional[torch.Generator] = None,
341
+ output_type: Optional[str] = "pil",
342
+ return_dict: bool = True,
343
+ guidance_scale: float = 7.5,
344
+ eta: float = 0.0,
345
+ ):
346
+ r"""
347
+ Function invoked when calling the pipeline for generation.
348
+ Args:
349
+ alpha (`float`, *optional*, defaults to 1.2):
350
+ Interpolation factor between the original text embedding (weighted by `alpha`) and the embedding optimized in `train()` (weighted by `1 - alpha`).
351
+ height (`int`, *optional*, defaults to 512):
352
+ The height in pixels of the generated image.
353
+ width (`int`, *optional*, defaults to 512):
354
+ The width in pixels of the generated image.
355
+ num_inference_steps (`int`, *optional*, defaults to 50):
356
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
357
+ expense of slower inference.
358
+ guidance_scale (`float`, *optional*, defaults to 7.5):
359
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
360
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
361
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
362
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
363
+ usually at the expense of lower image quality.
364
+ eta (`float`, *optional*, defaults to 0.0):
365
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
366
+ [`schedulers.DDIMScheduler`], will be ignored for others.
367
+ generator (`torch.Generator`, *optional*):
368
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
369
+ deterministic.
370
+ latents (`torch.FloatTensor`, *optional*):
371
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
372
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
373
+ tensor will be generated by sampling using the supplied random `generator`.
374
+ output_type (`str`, *optional*, defaults to `"pil"`):
375
+ The output format of the generated image. Choose between
376
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
377
+ return_dict (`bool`, *optional*, defaults to `True`):
378
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
379
+ plain tuple.
380
+ Returns:
381
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
382
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
383
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
384
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
385
+ (nsfw) content, according to the `safety_checker`.
386
+ """
387
+ if height % 8 != 0 or width % 8 != 0:
388
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
389
+ if getattr(self, "text_embeddings", None) is None:
390
+ raise ValueError("Please run the pipe.train() before trying to generate an image.")
391
+ if getattr(self, "text_embeddings_orig", None) is None:
392
+ raise ValueError("Please run the pipe.train() before trying to generate an image.")
393
+
394
+ text_embeddings = alpha * self.text_embeddings_orig + (1 - alpha) * self.text_embeddings
395
+
396
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
397
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
398
+ # corresponds to doing no classifier free guidance.
399
+ do_classifier_free_guidance = guidance_scale > 1.0
400
+ # get unconditional embeddings for classifier free guidance
401
+ if do_classifier_free_guidance:
402
+ uncond_tokens = [""]
403
+ max_length = self.tokenizer.model_max_length
404
+ uncond_input = self.tokenizer(
405
+ uncond_tokens,
406
+ padding="max_length",
407
+ max_length=max_length,
408
+ truncation=True,
409
+ return_tensors="pt",
410
+ )
411
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
412
+
413
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
414
+ seq_len = uncond_embeddings.shape[1]
415
+ uncond_embeddings = uncond_embeddings.view(1, seq_len, -1)
416
+
417
+ # For classifier free guidance, we need to do two forward passes.
418
+ # Here we concatenate the unconditional and text embeddings into a single batch
419
+ # to avoid doing two forward passes
420
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
421
+
422
+ # get the initial random noise unless the user supplied it
423
+
424
+ # Unlike in other pipelines, latents need to be generated in the target device
425
+ # for 1-to-1 results reproducibility with the CompVis implementation.
426
+ # However this currently doesn't work in `mps`.
427
+ latents_shape = (1, self.unet.config.in_channels, height // 8, width // 8)
428
+ latents_dtype = text_embeddings.dtype
429
+ if self.device.type == "mps":
430
+ # randn does not exist on mps
431
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
432
+ self.device
433
+ )
434
+ else:
435
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
436
+
437
+ # set timesteps
438
+ self.scheduler.set_timesteps(num_inference_steps)
439
+
440
+ # Some schedulers like PNDM have timesteps as arrays
441
+ # It's more optimized to move all timesteps to correct device beforehand
442
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
443
+
444
+ # scale the initial noise by the standard deviation required by the scheduler
445
+ latents = latents * self.scheduler.init_noise_sigma
446
+
447
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
448
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
449
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
450
+ # and should be between [0, 1]
451
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
452
+ extra_step_kwargs = {}
453
+ if accepts_eta:
454
+ extra_step_kwargs["eta"] = eta
455
+
456
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
457
+ # expand the latents if we are doing classifier free guidance
458
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
459
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
460
+
461
+ # predict the noise residual
462
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
463
+
464
+ # perform guidance
465
+ if do_classifier_free_guidance:
466
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
467
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
468
+
469
+ # compute the previous noisy sample x_t -> x_t-1
470
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
471
+
472
+ latents = 1 / 0.18215 * latents
473
+ image = self.vae.decode(latents).sample
474
+
475
+ image = (image / 2 + 0.5).clamp(0, 1)
476
+
477
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
478
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
479
+
480
+ if self.safety_checker is not None:
481
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
482
+ self.device
483
+ )
484
+ image, has_nsfw_concept = self.safety_checker(
485
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
486
+ )
487
+ else:
488
+ has_nsfw_concept = None
489
+
490
+ if output_type == "pil":
491
+ image = self.numpy_to_pil(image)
492
+
493
+ if not return_dict:
494
+ return (image, has_nsfw_concept)
495
+
496
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
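The pipeline above is meant to be consumed as a diffusers community pipeline (`custom_pipeline="imagic_stable_diffusion"`). Below is a minimal usage sketch, not part of the committed file: the base checkpoint, the image URL, and the hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only -- checkpoint id, image URL and hyperparameters are assumptions.
from io import BytesIO

import requests
import torch
from PIL import Image

from diffusers import DDIMScheduler, DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="imagic_stable_diffusion",
    scheduler=DDIMScheduler(
        beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False
    ),
).to("cuda")

response = requests.get("https://example.com/dog.png")  # placeholder URL
init_image = Image.open(BytesIO(response.content)).convert("RGB").resize((512, 512))

prompt = "A photo of a dog sitting"

# Step 1: optimize the text embedding, then fine-tune the UNet around the input image.
pipe.train(prompt, image=init_image, generator=torch.Generator("cuda").manual_seed(0))

# Step 2: generate the edit; `alpha` blends the original and the optimized embeddings.
result = pipe(alpha=1.2, guidance_scale=7.5, num_inference_steps=50)
result.images[0].save("imagic_edit.png")
```

Note that `train()` modifies the pipeline in place (it stores the optimized embeddings on `self` and fine-tunes the UNet), so subsequent `pipe(...)` calls reuse that learned state.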
v0.19.2/img2img_inpainting.py ADDED
@@ -0,0 +1,463 @@
1
+ import inspect
2
+ from typing import Callable, List, Optional, Tuple, Union
3
+
4
+ import numpy as np
5
+ import PIL
6
+ import torch
7
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
8
+
9
+ from diffusers import DiffusionPipeline
10
+ from diffusers.configuration_utils import FrozenDict
11
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
13
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
14
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
15
+ from diffusers.utils import deprecate, logging
16
+
17
+
18
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
19
+
20
+
21
+ def prepare_mask_and_masked_image(image, mask):
22
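+ # convert the RGB image to a float tensor in [-1, 1] with shape (1, 3, H, W)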
+ image = np.array(image.convert("RGB"))
23
+ image = image[None].transpose(0, 3, 1, 2)
24
+ image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
25
+
26
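+ # binarize the grayscale mask: pixels >= 0.5 become 1 (region to repaint), the rest 0 (region to keep)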
+ mask = np.array(mask.convert("L"))
27
+ mask = mask.astype(np.float32) / 255.0
28
+ mask = mask[None, None]
29
+ mask[mask < 0.5] = 0
30
+ mask[mask >= 0.5] = 1
31
+ mask = torch.from_numpy(mask)
32
+
33
+ masked_image = image * (mask < 0.5)
34
+
35
+ return mask, masked_image
36
+
37
+
38
+ def check_size(image, height, width):
39
+ if isinstance(image, PIL.Image.Image):
40
+ w, h = image.size
41
+ elif isinstance(image, torch.Tensor):
42
+ *_, h, w = image.shape
43
+
44
+ if h != height or w != width:
45
+ raise ValueError(f"Image size should be {height}x{width}, but got {h}x{w}")
46
+
47
+
48
+ def overlay_inner_image(image, inner_image, paste_offset: Tuple[int, int] = (0, 0)):
49
+ inner_image = inner_image.convert("RGBA")
50
+ image = image.convert("RGB")
51
+
52
+ image.paste(inner_image, paste_offset, inner_image)
53
+ image = image.convert("RGB")
54
+
55
+ return image
56
+
57
+
58
+ class ImageToImageInpaintingPipeline(DiffusionPipeline):
59
+ r"""
60
+ Pipeline for text-guided image-to-image inpainting using Stable Diffusion. *This is an experimental feature*.
61
+
62
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
63
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
64
+
65
+ Args:
66
+ vae ([`AutoencoderKL`]):
67
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
68
+ text_encoder ([`CLIPTextModel`]):
69
+ Frozen text-encoder. Stable Diffusion uses the text portion of
70
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
71
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
72
+ tokenizer (`CLIPTokenizer`):
73
+ Tokenizer of class
74
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
75
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
76
+ scheduler ([`SchedulerMixin`]):
77
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
78
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
79
+ safety_checker ([`StableDiffusionSafetyChecker`]):
80
+ Classification module that estimates whether generated images could be considered offensive or harmful.
81
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
82
+ feature_extractor ([`CLIPImageProcessor`]):
83
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
84
+ """
85
+
86
+ def __init__(
87
+ self,
88
+ vae: AutoencoderKL,
89
+ text_encoder: CLIPTextModel,
90
+ tokenizer: CLIPTokenizer,
91
+ unet: UNet2DConditionModel,
92
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
93
+ safety_checker: StableDiffusionSafetyChecker,
94
+ feature_extractor: CLIPImageProcessor,
95
+ ):
96
+ super().__init__()
97
+
98
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
99
+ deprecation_message = (
100
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
101
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
102
+ "to update the config accordingly as leaving `steps_offset` might lead to incorrect results"
103
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
104
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
105
+ " file"
106
+ )
107
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
108
+ new_config = dict(scheduler.config)
109
+ new_config["steps_offset"] = 1
110
+ scheduler._internal_dict = FrozenDict(new_config)
111
+
112
+ if safety_checker is None:
113
+ logger.warning(
114
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
115
+ " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
116
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
117
+ " strongly recommend keeping the safety filter enabled in all public facing circumstances, disabling"
118
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
119
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
120
+ )
121
+
122
+ self.register_modules(
123
+ vae=vae,
124
+ text_encoder=text_encoder,
125
+ tokenizer=tokenizer,
126
+ unet=unet,
127
+ scheduler=scheduler,
128
+ safety_checker=safety_checker,
129
+ feature_extractor=feature_extractor,
130
+ )
131
+
132
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
133
+ r"""
134
+ Enable sliced attention computation.
135
+
136
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
137
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
138
+
139
+ Args:
140
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
141
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
142
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
143
+ `attention_head_dim` must be a multiple of `slice_size`.
144
+ """
145
+ if slice_size == "auto":
146
+ # half the attention head size is usually a good trade-off between
147
+ # speed and memory
148
+ slice_size = self.unet.config.attention_head_dim // 2
149
+ self.unet.set_attention_slice(slice_size)
150
+
151
+ def disable_attention_slicing(self):
152
+ r"""
153
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
154
+ back to computing attention in one step.
155
+ """
156
+ # set slice_size = `None` to disable `attention slicing`
157
+ self.enable_attention_slicing(None)
158
+
159
+ @torch.no_grad()
160
+ def __call__(
161
+ self,
162
+ prompt: Union[str, List[str]],
163
+ image: Union[torch.FloatTensor, PIL.Image.Image],
164
+ inner_image: Union[torch.FloatTensor, PIL.Image.Image],
165
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image],
166
+ height: int = 512,
167
+ width: int = 512,
168
+ num_inference_steps: int = 50,
169
+ guidance_scale: float = 7.5,
170
+ negative_prompt: Optional[Union[str, List[str]]] = None,
171
+ num_images_per_prompt: Optional[int] = 1,
172
+ eta: float = 0.0,
173
+ generator: Optional[torch.Generator] = None,
174
+ latents: Optional[torch.FloatTensor] = None,
175
+ output_type: Optional[str] = "pil",
176
+ return_dict: bool = True,
177
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
178
+ callback_steps: int = 1,
179
+ **kwargs,
180
+ ):
181
+ r"""
182
+ Function invoked when calling the pipeline for generation.
183
+
184
+ Args:
185
+ prompt (`str` or `List[str]`):
186
+ The prompt or prompts to guide the image generation.
187
+ image (`torch.Tensor` or `PIL.Image.Image`):
188
+ `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
189
+ be masked out with `mask_image` and repainted according to `prompt`.
190
+ inner_image (`torch.Tensor` or `PIL.Image.Image`):
191
+ `Image`, or tensor representing an image batch which will be overlaid onto `image`. Non-transparent
192
+ regions of `inner_image` must fit inside white pixels in `mask_image`. Expects four channels, with
193
+ the last channel representing the alpha channel, which will be used to blend `inner_image` with
194
+ `image`. If the alpha channel is not provided, `inner_image` will be forcibly cast to RGBA.
195
+ mask_image (`PIL.Image.Image`):
196
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
197
+ repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
198
+ to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
199
+ instead of 3, so the expected shape would be `(B, H, W, 1)`.
200
+ height (`int`, *optional*, defaults to 512):
201
+ The height in pixels of the generated image.
202
+ width (`int`, *optional*, defaults to 512):
203
+ The width in pixels of the generated image.
204
+ num_inference_steps (`int`, *optional*, defaults to 50):
205
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
206
+ expense of slower inference.
207
+ guidance_scale (`float`, *optional*, defaults to 7.5):
208
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
209
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
210
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
211
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
212
+ usually at the expense of lower image quality.
213
+ negative_prompt (`str` or `List[str]`, *optional*):
214
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
215
+ if `guidance_scale` is less than `1`).
216
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
217
+ The number of images to generate per prompt.
218
+ eta (`float`, *optional*, defaults to 0.0):
219
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
220
+ [`schedulers.DDIMScheduler`], will be ignored for others.
221
+ generator (`torch.Generator`, *optional*):
222
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
223
+ deterministic.
224
+ latents (`torch.FloatTensor`, *optional*):
225
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
226
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
227
+ tensor will be generated by sampling using the supplied random `generator`.
228
+ output_type (`str`, *optional*, defaults to `"pil"`):
229
+ The output format of the generated image. Choose between
230
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
231
+ return_dict (`bool`, *optional*, defaults to `True`):
232
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
233
+ plain tuple.
234
+ callback (`Callable`, *optional*):
235
+ A function that will be called every `callback_steps` steps during inference. The function will be
236
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
237
+ callback_steps (`int`, *optional*, defaults to 1):
238
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
239
+ called at every step.
240
+
241
+ Returns:
242
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
243
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
244
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
245
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
246
+ (nsfw) content, according to the `safety_checker`.
247
+ """
248
+
249
+ if isinstance(prompt, str):
250
+ batch_size = 1
251
+ elif isinstance(prompt, list):
252
+ batch_size = len(prompt)
253
+ else:
254
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
255
+
256
+ if height % 8 != 0 or width % 8 != 0:
257
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
258
+
259
+ if (callback_steps is None) or (
260
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
261
+ ):
262
+ raise ValueError(
263
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
264
+ f" {type(callback_steps)}."
265
+ )
266
+
267
+ # check if input sizes are correct
268
+ check_size(image, height, width)
269
+ check_size(inner_image, height, width)
270
+ check_size(mask_image, height, width)
271
+
272
+ # get prompt text embeddings
273
+ text_inputs = self.tokenizer(
274
+ prompt,
275
+ padding="max_length",
276
+ max_length=self.tokenizer.model_max_length,
277
+ return_tensors="pt",
278
+ )
279
+ text_input_ids = text_inputs.input_ids
280
+
281
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
282
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
283
+ logger.warning(
284
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
285
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
286
+ )
287
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
288
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
289
+
290
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
291
+ bs_embed, seq_len, _ = text_embeddings.shape
292
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
293
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
294
+
295
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
296
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
297
+ # corresponds to doing no classifier free guidance.
298
+ do_classifier_free_guidance = guidance_scale > 1.0
299
+ # get unconditional embeddings for classifier free guidance
300
+ if do_classifier_free_guidance:
301
+ uncond_tokens: List[str]
302
+ if negative_prompt is None:
303
+ uncond_tokens = [""]
304
+ elif type(prompt) is not type(negative_prompt):
305
+ raise TypeError(
306
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
307
+ f" {type(prompt)}."
308
+ )
309
+ elif isinstance(negative_prompt, str):
310
+ uncond_tokens = [negative_prompt]
311
+ elif batch_size != len(negative_prompt):
312
+ raise ValueError(
313
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
314
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
315
+ " the batch size of `prompt`."
316
+ )
317
+ else:
318
+ uncond_tokens = negative_prompt
319
+
320
+ max_length = text_input_ids.shape[-1]
321
+ uncond_input = self.tokenizer(
322
+ uncond_tokens,
323
+ padding="max_length",
324
+ max_length=max_length,
325
+ truncation=True,
326
+ return_tensors="pt",
327
+ )
328
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
329
+
330
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
331
+ seq_len = uncond_embeddings.shape[1]
332
+ uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
333
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
334
+
335
+ # For classifier free guidance, we need to do two forward passes.
336
+ # Here we concatenate the unconditional and text embeddings into a single batch
337
+ # to avoid doing two forward passes
338
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
339
+
340
+ # get the initial random noise unless the user supplied it
341
+ # Unlike in other pipelines, latents need to be generated in the target device
342
+ # for 1-to-1 results reproducibility with the CompVis implementation.
343
+ # However this currently doesn't work in `mps`.
344
+ num_channels_latents = self.vae.config.latent_channels
345
+ latents_shape = (batch_size * num_images_per_prompt, num_channels_latents, height // 8, width // 8)
346
+ latents_dtype = text_embeddings.dtype
347
+ if latents is None:
348
+ if self.device.type == "mps":
349
+ # randn does not exist on mps
350
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
351
+ self.device
352
+ )
353
+ else:
354
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
355
+ else:
356
+ if latents.shape != latents_shape:
357
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
358
+ latents = latents.to(self.device)
359
+
360
+ # overlay the inner image
361
+ image = overlay_inner_image(image, inner_image)
362
+
363
+ # prepare mask and masked_image
364
+ mask, masked_image = prepare_mask_and_masked_image(image, mask_image)
365
+ mask = mask.to(device=self.device, dtype=text_embeddings.dtype)
366
+ masked_image = masked_image.to(device=self.device, dtype=text_embeddings.dtype)
367
+
368
+ # resize the mask to latents shape as we concatenate the mask to the latents
369
+ mask = torch.nn.functional.interpolate(mask, size=(height // 8, width // 8))
370
+
371
+ # encode the mask image into latents space so we can concatenate it to the latents
372
+ masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)
373
+ masked_image_latents = 0.18215 * masked_image_latents
374
+
375
+ # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method
376
+ mask = mask.repeat(batch_size * num_images_per_prompt, 1, 1, 1)
377
+ masked_image_latents = masked_image_latents.repeat(batch_size * num_images_per_prompt, 1, 1, 1)
378
+
379
+ mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask
380
+ masked_image_latents = (
381
+ torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents
382
+ )
383
+
384
+ num_channels_mask = mask.shape[1]
385
+ num_channels_masked_image = masked_image_latents.shape[1]
386
+
387
+ if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels:
388
+ raise ValueError(
389
+ f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects"
390
+ f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +"
391
+ f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}"
392
+ f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of"
393
+ " `pipeline.unet` or your `mask_image` or `image` input."
394
+ )
395
+
396
+ # set timesteps
397
+ self.scheduler.set_timesteps(num_inference_steps)
398
+
399
+ # Some schedulers like PNDM have timesteps as arrays
400
+ # It's more optimized to move all timesteps to correct device beforehand
401
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
402
+
403
+ # scale the initial noise by the standard deviation required by the scheduler
404
+ latents = latents * self.scheduler.init_noise_sigma
405
+
406
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
407
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
408
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
409
+ # and should be between [0, 1]
410
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
411
+ extra_step_kwargs = {}
412
+ if accepts_eta:
413
+ extra_step_kwargs["eta"] = eta
414
+
415
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
416
+ # expand the latents if we are doing classifier free guidance
417
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
418
+
419
+ # concat latents, mask, masked_image_latents in the channel dimension
420
+ latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
421
+
422
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
423
+
424
+ # predict the noise residual
425
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
426
+
427
+ # perform guidance
428
+ if do_classifier_free_guidance:
429
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
430
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
431
+
432
+ # compute the previous noisy sample x_t -> x_t-1
433
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
434
+
435
+ # call the callback, if provided
436
+ if callback is not None and i % callback_steps == 0:
437
+ callback(i, t, latents)
438
+
439
+ latents = 1 / 0.18215 * latents
440
+ image = self.vae.decode(latents).sample
441
+
442
+ image = (image / 2 + 0.5).clamp(0, 1)
443
+
444
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
445
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
446
+
447
+ if self.safety_checker is not None:
448
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
449
+ self.device
450
+ )
451
+ image, has_nsfw_concept = self.safety_checker(
452
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
453
+ )
454
+ else:
455
+ has_nsfw_concept = None
456
+
457
+ if output_type == "pil":
458
+ image = self.numpy_to_pil(image)
459
+
460
+ if not return_dict:
461
+ return (image, has_nsfw_concept)
462
+
463
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
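Like the Imagic pipeline earlier in this diff, this class is intended to be loaded as a diffusers community pipeline. A minimal usage sketch follows; the inpainting checkpoint id and the local image paths are assumptions for illustration only.

```python
# Illustrative sketch only -- checkpoint id and image paths are assumptions.
import torch
from PIL import Image

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    custom_pipeline="img2img_inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# All three inputs must match the requested resolution (512x512 by default);
# `inner_image` should carry an alpha channel marking the region to paste.
image = Image.open("base.png").convert("RGB").resize((512, 512))
inner_image = Image.open("overlay.png").convert("RGBA").resize((512, 512))
mask_image = Image.open("mask.png").convert("L").resize((512, 512))

prompt = "a mecha robot sitting on a bench"
result = pipe(prompt=prompt, image=image, inner_image=inner_image, mask_image=mask_image)
result.images[0].save("inpainted.png")
```

White areas of `mask_image` are repainted according to the prompt, while black areas are preserved; the non-transparent part of `inner_image` is pasted onto `image` before inpainting and must sit inside the white region of the mask.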
v0.19.2/interpolate_stable_diffusion.py ADDED
@@ -0,0 +1,524 @@
1
+ import inspect
2
+ import time
3
+ from pathlib import Path
4
+ from typing import Callable, List, Optional, Union
5
+
6
+ import numpy as np
7
+ import torch
8
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
9
+
10
+ from diffusers import DiffusionPipeline
11
+ from diffusers.configuration_utils import FrozenDict
12
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
13
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
14
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
15
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
16
+ from diffusers.utils import deprecate, logging
17
+
18
+
19
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
20
+
21
+
22
+ def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
23
+ """helper function to spherically interpolate two arrays v0 and v1"""
24
+
25
+ inputs_are_torch = False  # make sure the flag exists even when the inputs are already numpy arrays
+ if not isinstance(v0, np.ndarray):
26
+ inputs_are_torch = True
27
+ input_device = v0.device
28
+ v0 = v0.cpu().numpy()
29
+ v1 = v1.cpu().numpy()
30
+
31
+ dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
32
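+ # nearly colinear vectors: fall back to plain linear interpolation to avoid dividing by a tiny sin(theta)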
+ if np.abs(dot) > DOT_THRESHOLD:
33
+ v2 = (1 - t) * v0 + t * v1
34
+ else:
35
+ theta_0 = np.arccos(dot)
36
+ sin_theta_0 = np.sin(theta_0)
37
+ theta_t = theta_0 * t
38
+ sin_theta_t = np.sin(theta_t)
39
+ s0 = np.sin(theta_0 - theta_t) / sin_theta_0
40
+ s1 = sin_theta_t / sin_theta_0
41
+ v2 = s0 * v0 + s1 * v1
42
+
43
+ if inputs_are_torch:
44
+ v2 = torch.from_numpy(v2).to(input_device)
45
+
46
+ return v2
47
+
48
+
49
+ class StableDiffusionWalkPipeline(DiffusionPipeline):
50
+ r"""
51
+ Pipeline for text-to-image generation using Stable Diffusion.
52
+
53
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
54
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
55
+
56
+ Args:
57
+ vae ([`AutoencoderKL`]):
58
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
59
+ text_encoder ([`CLIPTextModel`]):
60
+ Frozen text-encoder. Stable Diffusion uses the text portion of
61
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
62
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
63
+ tokenizer (`CLIPTokenizer`):
64
+ Tokenizer of class
65
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
66
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
67
+ scheduler ([`SchedulerMixin`]):
68
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
69
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
70
+ safety_checker ([`StableDiffusionSafetyChecker`]):
71
+ Classification module that estimates whether generated images could be considered offensive or harmful.
72
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
73
+ feature_extractor ([`CLIPImageProcessor`]):
74
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
75
+ """
76
+
77
+ def __init__(
78
+ self,
79
+ vae: AutoencoderKL,
80
+ text_encoder: CLIPTextModel,
81
+ tokenizer: CLIPTokenizer,
82
+ unet: UNet2DConditionModel,
83
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
84
+ safety_checker: StableDiffusionSafetyChecker,
85
+ feature_extractor: CLIPImageProcessor,
86
+ ):
87
+ super().__init__()
88
+
89
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
90
+ deprecation_message = (
91
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
92
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
93
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
94
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
95
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
96
+ " file"
97
+ )
98
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
99
+ new_config = dict(scheduler.config)
100
+ new_config["steps_offset"] = 1
101
+ scheduler._internal_dict = FrozenDict(new_config)
102
+
103
+ if safety_checker is None:
104
+ logger.warning(
105
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
106
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
107
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
108
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
109
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
110
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
111
+ )
112
+
113
+ self.register_modules(
114
+ vae=vae,
115
+ text_encoder=text_encoder,
116
+ tokenizer=tokenizer,
117
+ unet=unet,
118
+ scheduler=scheduler,
119
+ safety_checker=safety_checker,
120
+ feature_extractor=feature_extractor,
121
+ )
122
+
123
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
124
+ r"""
125
+ Enable sliced attention computation.
126
+
127
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
128
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
129
+
130
+ Args:
131
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
132
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
133
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
134
+ `attention_head_dim` must be a multiple of `slice_size`.
135
+ """
136
+ if slice_size == "auto":
137
+ # half the attention head size is usually a good trade-off between
138
+ # speed and memory
139
+ slice_size = self.unet.config.attention_head_dim // 2
140
+ self.unet.set_attention_slice(slice_size)
141
+
142
+ def disable_attention_slicing(self):
143
+ r"""
144
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
145
+ back to computing attention in one step.
146
+ """
147
+ # set slice_size = `None` to disable `attention slicing`
148
+ self.enable_attention_slicing(None)
149
+
150
+ @torch.no_grad()
151
+ def __call__(
152
+ self,
153
+ prompt: Optional[Union[str, List[str]]] = None,
154
+ height: int = 512,
155
+ width: int = 512,
156
+ num_inference_steps: int = 50,
157
+ guidance_scale: float = 7.5,
158
+ negative_prompt: Optional[Union[str, List[str]]] = None,
159
+ num_images_per_prompt: Optional[int] = 1,
160
+ eta: float = 0.0,
161
+ generator: Optional[torch.Generator] = None,
162
+ latents: Optional[torch.FloatTensor] = None,
163
+ output_type: Optional[str] = "pil",
164
+ return_dict: bool = True,
165
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
166
+ callback_steps: int = 1,
167
+ text_embeddings: Optional[torch.FloatTensor] = None,
168
+ **kwargs,
169
+ ):
170
+ r"""
171
+ Function invoked when calling the pipeline for generation.
172
+
173
+ Args:
174
+ prompt (`str` or `List[str]`, *optional*, defaults to `None`):
175
+ The prompt or prompts to guide the image generation. If not provided, `text_embeddings` is required.
176
+ height (`int`, *optional*, defaults to 512):
177
+ The height in pixels of the generated image.
178
+ width (`int`, *optional*, defaults to 512):
179
+ The width in pixels of the generated image.
180
+ num_inference_steps (`int`, *optional*, defaults to 50):
181
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
182
+ expense of slower inference.
183
+ guidance_scale (`float`, *optional*, defaults to 7.5):
184
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
185
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
186
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
187
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
188
+ usually at the expense of lower image quality.
189
+ negative_prompt (`str` or `List[str]`, *optional*):
190
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
191
+ if `guidance_scale` is less than `1`).
192
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
193
+ The number of images to generate per prompt.
194
+ eta (`float`, *optional*, defaults to 0.0):
195
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
196
+ [`schedulers.DDIMScheduler`], will be ignored for others.
197
+ generator (`torch.Generator`, *optional*):
198
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
199
+ deterministic.
200
+ latents (`torch.FloatTensor`, *optional*):
201
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
202
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
203
+ tensor will ge generated by sampling using the supplied random `generator`.
204
+ output_type (`str`, *optional*, defaults to `"pil"`):
205
+ The output format of the generate image. Choose between
206
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
207
+ return_dict (`bool`, *optional*, defaults to `True`):
208
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
209
+ plain tuple.
210
+ callback (`Callable`, *optional*):
211
+ A function that will be called every `callback_steps` steps during inference. The function will be
212
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
213
+ callback_steps (`int`, *optional*, defaults to 1):
214
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
215
+ called at every step.
216
+ text_embeddings (`torch.FloatTensor`, *optional*, defaults to `None`):
217
+ Pre-generated text embeddings to be used as inputs for image generation. Can be used in place of
218
+ `prompt` to avoid re-computing the embeddings. If not provided, the embeddings will be generated from
219
+ the supplied `prompt`.
220
+
221
+ Returns:
222
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
223
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
224
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
225
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
226
+ (nsfw) content, according to the `safety_checker`.
227
+ """
228
+
229
+ if height % 8 != 0 or width % 8 != 0:
230
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
231
+
232
+ if (callback_steps is None) or (
233
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
234
+ ):
235
+ raise ValueError(
236
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
237
+ f" {type(callback_steps)}."
238
+ )
239
+
240
+ if text_embeddings is None:
241
+ if isinstance(prompt, str):
242
+ batch_size = 1
243
+ elif isinstance(prompt, list):
244
+ batch_size = len(prompt)
245
+ else:
246
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
247
+
248
+ # get prompt text embeddings
249
+ text_inputs = self.tokenizer(
250
+ prompt,
251
+ padding="max_length",
252
+ max_length=self.tokenizer.model_max_length,
253
+ return_tensors="pt",
254
+ )
255
+ text_input_ids = text_inputs.input_ids
256
+
257
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
258
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
259
+ print(
260
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
261
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
262
+ )
263
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
264
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
265
+ else:
266
+ batch_size = text_embeddings.shape[0]
267
+
268
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
269
+ bs_embed, seq_len, _ = text_embeddings.shape
270
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
271
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
272
+
273
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
274
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
275
+ # corresponds to doing no classifier free guidance.
276
+ do_classifier_free_guidance = guidance_scale > 1.0
277
+ # get unconditional embeddings for classifier free guidance
278
+ if do_classifier_free_guidance:
279
+ uncond_tokens: List[str]
280
+ if negative_prompt is None:
281
+ uncond_tokens = [""] * batch_size
282
+ elif type(prompt) is not type(negative_prompt):
283
+ raise TypeError(
284
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
285
+ f" {type(prompt)}."
286
+ )
287
+ elif isinstance(negative_prompt, str):
288
+ uncond_tokens = [negative_prompt]
289
+ elif batch_size != len(negative_prompt):
290
+ raise ValueError(
291
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
292
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
293
+ " the batch size of `prompt`."
294
+ )
295
+ else:
296
+ uncond_tokens = negative_prompt
297
+
298
+ max_length = self.tokenizer.model_max_length
299
+ uncond_input = self.tokenizer(
300
+ uncond_tokens,
301
+ padding="max_length",
302
+ max_length=max_length,
303
+ truncation=True,
304
+ return_tensors="pt",
305
+ )
306
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
307
+
308
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
309
+ seq_len = uncond_embeddings.shape[1]
310
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
311
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
312
+
313
+ # For classifier free guidance, we need to do two forward passes.
314
+ # Here we concatenate the unconditional and text embeddings into a single batch
315
+ # to avoid doing two forward passes
316
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
317
+
318
+ # get the initial random noise unless the user supplied it
319
+
320
+ # Unlike in other pipelines, latents need to be generated in the target device
321
+ # for 1-to-1 results reproducibility with the CompVis implementation.
322
+ # However this currently doesn't work in `mps`.
323
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
324
+ latents_dtype = text_embeddings.dtype
325
+ if latents is None:
326
+ if self.device.type == "mps":
327
+ # randn does not work reproducibly on mps
328
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
329
+ self.device
330
+ )
331
+ else:
332
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
333
+ else:
334
+ if latents.shape != latents_shape:
335
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
336
+ latents = latents.to(self.device)
337
+
338
+ # set timesteps
339
+ self.scheduler.set_timesteps(num_inference_steps)
340
+
341
+ # Some schedulers like PNDM have timesteps as arrays
342
+ # It's more optimized to move all timesteps to correct device beforehand
343
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
344
+
345
+ # scale the initial noise by the standard deviation required by the scheduler
346
+ latents = latents * self.scheduler.init_noise_sigma
347
+
348
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
349
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
350
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
351
+ # and should be between [0, 1]
352
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
353
+ extra_step_kwargs = {}
354
+ if accepts_eta:
355
+ extra_step_kwargs["eta"] = eta
356
+
357
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
358
+ # expand the latents if we are doing classifier free guidance
359
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
360
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
361
+
362
+ # predict the noise residual
363
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
364
+
365
+ # perform guidance
366
+ if do_classifier_free_guidance:
367
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
368
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
369
+
370
+ # compute the previous noisy sample x_t -> x_t-1
371
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
372
+
373
+ # call the callback, if provided
374
+ if callback is not None and i % callback_steps == 0:
375
+ callback(i, t, latents)
376
+
377
+ latents = 1 / 0.18215 * latents
378
+ image = self.vae.decode(latents).sample
379
+
380
+ image = (image / 2 + 0.5).clamp(0, 1)
381
+
382
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
383
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
384
+
385
+ if self.safety_checker is not None:
386
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
387
+ self.device
388
+ )
389
+ image, has_nsfw_concept = self.safety_checker(
390
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
391
+ )
392
+ else:
393
+ has_nsfw_concept = None
394
+
395
+ if output_type == "pil":
396
+ image = self.numpy_to_pil(image)
397
+
398
+ if not return_dict:
399
+ return (image, has_nsfw_concept)
400
+
401
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
402
+
403
+ def embed_text(self, text):
404
+ """takes in text and turns it into text embeddings"""
405
+ text_input = self.tokenizer(
406
+ text,
407
+ padding="max_length",
408
+ max_length=self.tokenizer.model_max_length,
409
+ truncation=True,
410
+ return_tensors="pt",
411
+ )
412
+ with torch.no_grad():
413
+ embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
414
+ return embed
415
+
416
+ def get_noise(self, seed, dtype=torch.float32, height=512, width=512):
417
+ """Takes in random seed and returns corresponding noise vector"""
418
+ return torch.randn(
419
+ (1, self.unet.config.in_channels, height // 8, width // 8),
420
+ generator=torch.Generator(device=self.device).manual_seed(seed),
421
+ device=self.device,
422
+ dtype=dtype,
423
+ )
424
+
425
+ def walk(
426
+ self,
427
+ prompts: List[str],
428
+ seeds: List[int],
429
+ num_interpolation_steps: Optional[int] = 6,
430
+ output_dir: Optional[str] = "./dreams",
431
+ name: Optional[str] = None,
432
+ batch_size: Optional[int] = 1,
433
+ height: Optional[int] = 512,
434
+ width: Optional[int] = 512,
435
+ guidance_scale: Optional[float] = 7.5,
436
+ num_inference_steps: Optional[int] = 50,
437
+ eta: Optional[float] = 0.0,
438
+ ) -> List[str]:
439
+ """
440
+ Walks through a series of prompts and seeds, interpolating between them and saving the results to disk.
441
+
442
+ Args:
443
+ prompts (`List[str]`):
444
+ List of prompts to generate images for.
445
+ seeds (`List[int]`):
446
+ List of seeds corresponding to provided prompts. Must be the same length as prompts.
447
+ num_interpolation_steps (`int`, *optional*, defaults to 6):
448
+ Number of interpolation steps to take between prompts.
449
+ output_dir (`str`, *optional*, defaults to `./dreams`):
450
+ Directory to save the generated images to.
451
+ name (`str`, *optional*, defaults to `None`):
452
+ Subdirectory of `output_dir` to save the generated images to. If `None`, the name will
453
+ be the current time.
454
+ batch_size (`int`, *optional*, defaults to 1):
455
+ Number of images to generate at once.
456
+ height (`int`, *optional*, defaults to 512):
457
+ Height of the generated images.
458
+ width (`int`, *optional*, defaults to 512):
459
+ Width of the generated images.
460
+ guidance_scale (`float`, *optional*, defaults to 7.5):
461
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
462
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
463
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
464
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
465
+ usually at the expense of lower image quality.
466
+ num_inference_steps (`int`, *optional*, defaults to 50):
467
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
468
+ expense of slower inference.
469
+ eta (`float`, *optional*, defaults to 0.0):
470
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
471
+ [`schedulers.DDIMScheduler`], will be ignored for others.
472
+
473
+ Returns:
474
+ `List[str]`: List of paths to the generated images.
475
+ """
476
+ if len(prompts) != len(seeds):
477
+ raise ValueError(
478
+ f"Number of prompts and seeds must be equalGot {len(prompts)} prompts and {len(seeds)} seeds"
479
+ )
480
+
481
+ name = name or time.strftime("%Y%m%d-%H%M%S")
482
+ save_path = Path(output_dir) / name
483
+ save_path.mkdir(exist_ok=True, parents=True)
484
+
485
+ frame_idx = 0
486
+ frame_filepaths = []
487
+ for prompt_a, prompt_b, seed_a, seed_b in zip(prompts, prompts[1:], seeds, seeds[1:]):
488
+ # Embed Text
489
+ embed_a = self.embed_text(prompt_a)
490
+ embed_b = self.embed_text(prompt_b)
491
+
492
+ # Get Noise
493
+ noise_dtype = embed_a.dtype
494
+ noise_a = self.get_noise(seed_a, noise_dtype, height, width)
495
+ noise_b = self.get_noise(seed_b, noise_dtype, height, width)
496
+
497
+ noise_batch, embeds_batch = None, None
498
+ T = np.linspace(0.0, 1.0, num_interpolation_steps)
499
+ for i, t in enumerate(T):
500
+ noise = slerp(float(t), noise_a, noise_b)
501
+ embed = torch.lerp(embed_a, embed_b, t)
502
+
503
+ noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise], dim=0)
504
+ embeds_batch = embed if embeds_batch is None else torch.cat([embeds_batch, embed], dim=0)
505
+
506
+ batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == T.shape[0]
507
+ if batch_is_ready:
508
+ outputs = self(
509
+ latents=noise_batch,
510
+ text_embeddings=embeds_batch,
511
+ height=height,
512
+ width=width,
513
+ guidance_scale=guidance_scale,
514
+ eta=eta,
515
+ num_inference_steps=num_inference_steps,
516
+ )
517
+ noise_batch, embeds_batch = None, None
518
+
519
+ for image in outputs["images"]:
520
+ frame_filepath = str(save_path / f"frame_{frame_idx:06d}.png")
521
+ image.save(frame_filepath)
522
+ frame_filepaths.append(frame_filepath)
523
+ frame_idx += 1
524
+ return frame_filepaths
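A minimal usage sketch for the `walk` method above, which interpolates latent noise with `slerp` and text embeddings with `torch.lerp` between consecutive (prompt, seed) pairs. The checkpoint id and the `custom_pipeline` name (taken from the `interpolate_stable_diffusion.py` file name) are assumptions, not part of the file itself:

```python
# Hypothetical usage of the interpolation pipeline's `walk` method (assumed setup).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",                 # assumed checkpoint
    custom_pipeline="interpolate_stable_diffusion",   # assumed community pipeline name
    torch_dtype=torch.float16,
).to("cuda")

# Interpolate between two prompt/seed pairs; frames are written to ./dreams/<name>/
frame_paths = pipe.walk(
    prompts=["a photo of a dog", "a photo of a cat"],
    seeds=[42, 1337],
    num_interpolation_steps=8,
    num_inference_steps=30,
)
print(frame_paths[:3])
```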
v0.19.2/lpw_stable_diffusion.py ADDED
@@ -0,0 +1,1470 @@
1
+ import inspect
2
+ import re
3
+ from typing import Any, Callable, Dict, List, Optional, Union
4
+
5
+ import numpy as np
6
+ import PIL
7
+ import torch
8
+ from packaging import version
9
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
10
+
11
+ from diffusers import DiffusionPipeline
12
+ from diffusers.configuration_utils import FrozenDict
13
+ from diffusers.image_processor import VaeImageProcessor
14
+ from diffusers.loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin
15
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
16
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker
17
+ from diffusers.schedulers import KarrasDiffusionSchedulers
18
+ from diffusers.utils import (
19
+ PIL_INTERPOLATION,
20
+ deprecate,
21
+ is_accelerate_available,
22
+ is_accelerate_version,
23
+ logging,
24
+ randn_tensor,
25
+ )
26
+
27
+
28
+ # ------------------------------------------------------------------------------
29
+
30
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
31
+
32
+ re_attention = re.compile(
33
+ r"""
34
+ \\\(|
35
+ \\\)|
36
+ \\\[|
37
+ \\]|
38
+ \\\\|
39
+ \\|
40
+ \(|
41
+ \[|
42
+ :([+-]?[.\d]+)\)|
43
+ \)|
44
+ ]|
45
+ [^\\()\[\]:]+|
46
+ :
47
+ """,
48
+ re.X,
49
+ )
50
+
51
+
52
+ def parse_prompt_attention(text):
53
+ """
54
+ Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
55
+ Accepted tokens are:
56
+ (abc) - increases attention to abc by a multiplier of 1.1
57
+ (abc:3.12) - increases attention to abc by a multiplier of 3.12
58
+ [abc] - decreases attention to abc by a multiplier of 1.1
59
+ \( - literal character '('
60
+ \[ - literal character '['
61
+ \) - literal character ')'
62
+ \] - literal character ']'
63
+ \\ - literal character '\'
64
+ anything else - just text
65
+ >>> parse_prompt_attention('normal text')
66
+ [['normal text', 1.0]]
67
+ >>> parse_prompt_attention('an (important) word')
68
+ [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
69
+ >>> parse_prompt_attention('(unbalanced')
70
+ [['unbalanced', 1.1]]
71
+ >>> parse_prompt_attention('\(literal\]')
72
+ [['(literal]', 1.0]]
73
+ >>> parse_prompt_attention('(unnecessary)(parens)')
74
+ [['unnecessaryparens', 1.1]]
75
+ >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
76
+ [['a ', 1.0],
77
+ ['house', 1.5730000000000004],
78
+ [' ', 1.1],
79
+ ['on', 1.0],
80
+ [' a ', 1.1],
81
+ ['hill', 0.55],
82
+ [', sun, ', 1.1],
83
+ ['sky', 1.4641000000000006],
84
+ ['.', 1.1]]
85
+ """
86
+
87
+ res = []
88
+ round_brackets = []
89
+ square_brackets = []
90
+
91
+ round_bracket_multiplier = 1.1
92
+ square_bracket_multiplier = 1 / 1.1
93
+
94
+ def multiply_range(start_position, multiplier):
95
+ for p in range(start_position, len(res)):
96
+ res[p][1] *= multiplier
97
+
98
+ for m in re_attention.finditer(text):
99
+ text = m.group(0)
100
+ weight = m.group(1)
101
+
102
+ if text.startswith("\\"):
103
+ res.append([text[1:], 1.0])
104
+ elif text == "(":
105
+ round_brackets.append(len(res))
106
+ elif text == "[":
107
+ square_brackets.append(len(res))
108
+ elif weight is not None and len(round_brackets) > 0:
109
+ multiply_range(round_brackets.pop(), float(weight))
110
+ elif text == ")" and len(round_brackets) > 0:
111
+ multiply_range(round_brackets.pop(), round_bracket_multiplier)
112
+ elif text == "]" and len(square_brackets) > 0:
113
+ multiply_range(square_brackets.pop(), square_bracket_multiplier)
114
+ else:
115
+ res.append([text, 1.0])
116
+
117
+ for pos in round_brackets:
118
+ multiply_range(pos, round_bracket_multiplier)
119
+
120
+ for pos in square_brackets:
121
+ multiply_range(pos, square_bracket_multiplier)
122
+
123
+ if len(res) == 0:
124
+ res = [["", 1.0]]
125
+
126
+ # merge runs of identical weights
127
+ i = 0
128
+ while i + 1 < len(res):
129
+ if res[i][1] == res[i + 1][1]:
130
+ res[i][0] += res[i + 1][0]
131
+ res.pop(i + 1)
132
+ else:
133
+ i += 1
134
+
135
+ return res
136
+
137
+
138
+ def get_prompts_with_weights(pipe: DiffusionPipeline, prompt: List[str], max_length: int):
139
+ r"""
140
+ Tokenize a list of prompts and return its tokens with weights of each token.
141
+
142
+ No padding, starting or ending token is included.
143
+ """
144
+ tokens = []
145
+ weights = []
146
+ truncated = False
147
+ for text in prompt:
148
+ texts_and_weights = parse_prompt_attention(text)
149
+ text_token = []
150
+ text_weight = []
151
+ for word, weight in texts_and_weights:
152
+ # tokenize and discard the starting and the ending token
153
+ token = pipe.tokenizer(word).input_ids[1:-1]
154
+ text_token += token
155
+ # copy the weight by length of token
156
+ text_weight += [weight] * len(token)
157
+ # stop if the text is too long (longer than truncation limit)
158
+ if len(text_token) > max_length:
159
+ truncated = True
160
+ break
161
+ # truncate
162
+ if len(text_token) > max_length:
163
+ truncated = True
164
+ text_token = text_token[:max_length]
165
+ text_weight = text_weight[:max_length]
166
+ tokens.append(text_token)
167
+ weights.append(text_weight)
168
+ if truncated:
169
+ logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
170
+ return tokens, weights
171
+
172
+
173
+ def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, pad, no_boseos_middle=True, chunk_length=77):
174
+ r"""
175
+ Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
176
+ """
177
+ max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
178
+ weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
179
+ for i in range(len(tokens)):
180
+ tokens[i] = [bos] + tokens[i] + [pad] * (max_length - 1 - len(tokens[i]) - 1) + [eos]
181
+ if no_boseos_middle:
182
+ weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
183
+ else:
184
+ w = []
185
+ if len(weights[i]) == 0:
186
+ w = [1.0] * weights_length
187
+ else:
188
+ for j in range(max_embeddings_multiples):
189
+ w.append(1.0) # weight for starting token in this chunk
190
+ w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
191
+ w.append(1.0) # weight for ending token in this chunk
192
+ w += [1.0] * (weights_length - len(w))
193
+ weights[i] = w[:]
194
+
195
+ return tokens, weights
196
+
197
+
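A small sanity sketch for `pad_tokens_and_weights` above, assuming the function is in scope and using CLIP's usual special-token ids (49406 for BOS, 49407 for EOS/pad) as stand-ins:

```python
# Toy check: a 3-token prompt is padded to the 77-token chunk length,
# and the weights get 1.0 for the added BOS/EOS/pad positions.
tokens = [[320, 1125, 539]]            # already tokenized, no special tokens
weights = [[1.0, 1.1, 1.1]]
tokens, weights = pad_tokens_and_weights(
    tokens, weights, max_length=77, bos=49406, eos=49407, pad=49407, chunk_length=77
)
assert len(tokens[0]) == 77 and len(weights[0]) == 77
```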
198
+ def get_unweighted_text_embeddings(
199
+ pipe: DiffusionPipeline,
200
+ text_input: torch.Tensor,
201
+ chunk_length: int,
202
+ no_boseos_middle: Optional[bool] = True,
203
+ ):
204
+ """
205
+ When the length of the token sequence exceeds the capacity of the text encoder,
206
+ it is split into chunks that are sent to the text encoder individually.
207
+ """
208
+ max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
209
+ if max_embeddings_multiples > 1:
210
+ text_embeddings = []
211
+ for i in range(max_embeddings_multiples):
212
+ # extract the i-th chunk
213
+ text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].clone()
214
+
215
+ # cover the head and the tail by the starting and the ending tokens
216
+ text_input_chunk[:, 0] = text_input[0, 0]
217
+ text_input_chunk[:, -1] = text_input[0, -1]
218
+ text_embedding = pipe.text_encoder(text_input_chunk)[0]
219
+
220
+ if no_boseos_middle:
221
+ if i == 0:
222
+ # discard the ending token
223
+ text_embedding = text_embedding[:, :-1]
224
+ elif i == max_embeddings_multiples - 1:
225
+ # discard the starting token
226
+ text_embedding = text_embedding[:, 1:]
227
+ else:
228
+ # discard both starting and ending tokens
229
+ text_embedding = text_embedding[:, 1:-1]
230
+
231
+ text_embeddings.append(text_embedding)
232
+ text_embeddings = torch.concat(text_embeddings, axis=1)
233
+ else:
234
+ text_embeddings = pipe.text_encoder(text_input)[0]
235
+ return text_embeddings
236
+
237
+
238
+ def get_weighted_text_embeddings(
239
+ pipe: DiffusionPipeline,
240
+ prompt: Union[str, List[str]],
241
+ uncond_prompt: Optional[Union[str, List[str]]] = None,
242
+ max_embeddings_multiples: Optional[int] = 3,
243
+ no_boseos_middle: Optional[bool] = False,
244
+ skip_parsing: Optional[bool] = False,
245
+ skip_weighting: Optional[bool] = False,
246
+ ):
247
+ r"""
248
+ Prompts can be assigned local weights using brackets. For example,
249
+ prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
250
+ and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
251
+
252
+ Also, to regularize the embedding, the weighted embedding is scaled to preserve the original mean.
253
+
254
+ Args:
255
+ pipe (`DiffusionPipeline`):
256
+ Pipe to provide access to the tokenizer and the text encoder.
257
+ prompt (`str` or `List[str]`):
258
+ The prompt or prompts to guide the image generation.
259
+ uncond_prompt (`str` or `List[str]`):
260
+ The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
261
+ is provided, the embeddings of prompt and uncond_prompt are concatenated.
262
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
263
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
264
+ no_boseos_middle (`bool`, *optional*, defaults to `False`):
265
+ If the length of the text tokens is a multiple of the capacity of the text encoder, whether to reserve the starting and
266
+ ending tokens in each of the chunks in the middle.
267
+ skip_parsing (`bool`, *optional*, defaults to `False`):
268
+ Skip the parsing of brackets.
269
+ skip_weighting (`bool`, *optional*, defaults to `False`):
270
+ Skip the weighting. When the parsing is skipped, it is forced True.
271
+ """
272
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
273
+ if isinstance(prompt, str):
274
+ prompt = [prompt]
275
+
276
+ if not skip_parsing:
277
+ prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
278
+ if uncond_prompt is not None:
279
+ if isinstance(uncond_prompt, str):
280
+ uncond_prompt = [uncond_prompt]
281
+ uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
282
+ else:
283
+ prompt_tokens = [
284
+ token[1:-1] for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True).input_ids
285
+ ]
286
+ prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
287
+ if uncond_prompt is not None:
288
+ if isinstance(uncond_prompt, str):
289
+ uncond_prompt = [uncond_prompt]
290
+ uncond_tokens = [
291
+ token[1:-1]
292
+ for token in pipe.tokenizer(uncond_prompt, max_length=max_length, truncation=True).input_ids
293
+ ]
294
+ uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
295
+
296
+ # round up the longest length of tokens to a multiple of (model_max_length - 2)
297
+ max_length = max([len(token) for token in prompt_tokens])
298
+ if uncond_prompt is not None:
299
+ max_length = max(max_length, max([len(token) for token in uncond_tokens]))
300
+
301
+ max_embeddings_multiples = min(
302
+ max_embeddings_multiples,
303
+ (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
304
+ )
305
+ max_embeddings_multiples = max(1, max_embeddings_multiples)
306
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
307
+
308
+ # pad the length of tokens and weights
309
+ bos = pipe.tokenizer.bos_token_id
310
+ eos = pipe.tokenizer.eos_token_id
311
+ pad = getattr(pipe.tokenizer, "pad_token_id", eos)
312
+ prompt_tokens, prompt_weights = pad_tokens_and_weights(
313
+ prompt_tokens,
314
+ prompt_weights,
315
+ max_length,
316
+ bos,
317
+ eos,
318
+ pad,
319
+ no_boseos_middle=no_boseos_middle,
320
+ chunk_length=pipe.tokenizer.model_max_length,
321
+ )
322
+ prompt_tokens = torch.tensor(prompt_tokens, dtype=torch.long, device=pipe.device)
323
+ if uncond_prompt is not None:
324
+ uncond_tokens, uncond_weights = pad_tokens_and_weights(
325
+ uncond_tokens,
326
+ uncond_weights,
327
+ max_length,
328
+ bos,
329
+ eos,
330
+ pad,
331
+ no_boseos_middle=no_boseos_middle,
332
+ chunk_length=pipe.tokenizer.model_max_length,
333
+ )
334
+ uncond_tokens = torch.tensor(uncond_tokens, dtype=torch.long, device=pipe.device)
335
+
336
+ # get the embeddings
337
+ text_embeddings = get_unweighted_text_embeddings(
338
+ pipe,
339
+ prompt_tokens,
340
+ pipe.tokenizer.model_max_length,
341
+ no_boseos_middle=no_boseos_middle,
342
+ )
343
+ prompt_weights = torch.tensor(prompt_weights, dtype=text_embeddings.dtype, device=text_embeddings.device)
344
+ if uncond_prompt is not None:
345
+ uncond_embeddings = get_unweighted_text_embeddings(
346
+ pipe,
347
+ uncond_tokens,
348
+ pipe.tokenizer.model_max_length,
349
+ no_boseos_middle=no_boseos_middle,
350
+ )
351
+ uncond_weights = torch.tensor(uncond_weights, dtype=uncond_embeddings.dtype, device=uncond_embeddings.device)
352
+
353
+ # assign weights to the prompts and normalize in the sense of mean
354
+ # TODO: should we normalize by chunk or in a whole (current implementation)?
355
+ if (not skip_parsing) and (not skip_weighting):
356
+ previous_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype)
357
+ text_embeddings *= prompt_weights.unsqueeze(-1)
358
+ current_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype)
359
+ text_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1)
360
+ if uncond_prompt is not None:
361
+ previous_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype)
362
+ uncond_embeddings *= uncond_weights.unsqueeze(-1)
363
+ current_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype)
364
+ uncond_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1)
365
+
366
+ if uncond_prompt is not None:
367
+ return text_embeddings, uncond_embeddings
368
+ return text_embeddings, None
369
+
370
+
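A minimal sketch of how `get_weighted_text_embeddings` might be used with the weighting syntax, assuming `pipe` is an already-loaded Stable Diffusion pipeline whose tokenizer and text encoder are on the execution device:

```python
# Hypothetical usage: weight "sunset" up and inspect the resulting embeddings.
prompt = "a photo of a beach at (sunset:1.4), ultra detailed"
negative = "blurry, low quality"

prompt_embeds, negative_prompt_embeds = get_weighted_text_embeddings(
    pipe=pipe,                       # assumed loaded pipeline
    prompt=prompt,
    uncond_prompt=negative,
    max_embeddings_multiples=3,
)
# Both tensors share the CLIP hidden size; the sequence length grows in 77-token
# chunks when the prompt exceeds the tokenizer's model_max_length.
print(prompt_embeds.shape, negative_prompt_embeds.shape)
```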
371
+ def preprocess_image(image, batch_size):
372
+ w, h = image.size
373
+ w, h = (x - x % 8 for x in (w, h)) # resize to integer multiple of 8
374
+ image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
375
+ image = np.array(image).astype(np.float32) / 255.0
376
+ image = np.vstack([image[None].transpose(0, 3, 1, 2)] * batch_size)
377
+ image = torch.from_numpy(image)
378
+ return 2.0 * image - 1.0
379
+
380
+
381
+ def preprocess_mask(mask, batch_size, scale_factor=8):
382
+ if not isinstance(mask, torch.FloatTensor):
383
+ mask = mask.convert("L")
384
+ w, h = mask.size
385
+ w, h = (x - x % 8 for x in (w, h)) # resize to integer multiple of 8
386
+ mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL_INTERPOLATION["nearest"])
387
+ mask = np.array(mask).astype(np.float32) / 255.0
388
+ mask = np.tile(mask, (4, 1, 1))
389
+ mask = np.vstack([mask[None]] * batch_size)
390
+ mask = 1 - mask # repaint white, keep black
391
+ mask = torch.from_numpy(mask)
392
+ return mask
393
+
394
+ else:
395
+ valid_mask_channel_sizes = [1, 3]
396
+ # if mask channel is fourth tensor dimension, permute dimensions to pytorch standard (B, C, H, W)
397
+ if mask.shape[3] in valid_mask_channel_sizes:
398
+ mask = mask.permute(0, 3, 1, 2)
399
+ elif mask.shape[1] not in valid_mask_channel_sizes:
400
+ raise ValueError(
401
+ f"Mask channel dimension of size in {valid_mask_channel_sizes} should be second or fourth dimension,"
402
+ f" but received mask of shape {tuple(mask.shape)}"
403
+ )
404
+ # (potentially) reduce mask channel dimension from 3 to 1 for broadcasting to latent shape
405
+ mask = mask.mean(dim=1, keepdim=True)
406
+ h, w = mask.shape[-2:]
407
+ h, w = (x - x % 8 for x in (h, w)) # resize to integer multiple of 8
408
+ mask = torch.nn.functional.interpolate(mask, (h // scale_factor, w // scale_factor))
409
+ return mask
410
+
411
+
412
+ class StableDiffusionLongPromptWeightingPipeline(
413
+ DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin, FromSingleFileMixin
414
+ ):
415
+ r"""
416
+ Pipeline for text-to-image generation using Stable Diffusion without a token length limit, with support for parsing
417
+ weighting in the prompt.
418
+
419
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
420
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
421
+
422
+ Args:
423
+ vae ([`AutoencoderKL`]):
424
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
425
+ text_encoder ([`CLIPTextModel`]):
426
+ Frozen text-encoder. Stable Diffusion uses the text portion of
427
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
428
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
429
+ tokenizer (`CLIPTokenizer`):
430
+ Tokenizer of class
431
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
432
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
433
+ scheduler ([`SchedulerMixin`]):
434
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
435
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
436
+ safety_checker ([`StableDiffusionSafetyChecker`]):
437
+ Classification module that estimates whether generated images could be considered offensive or harmful.
438
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
439
+ feature_extractor ([`CLIPImageProcessor`]):
440
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
441
+ """
442
+
443
+ _optional_components = ["safety_checker", "feature_extractor"]
444
+
445
+ def __init__(
446
+ self,
447
+ vae: AutoencoderKL,
448
+ text_encoder: CLIPTextModel,
449
+ tokenizer: CLIPTokenizer,
450
+ unet: UNet2DConditionModel,
451
+ scheduler: KarrasDiffusionSchedulers,
452
+ safety_checker: StableDiffusionSafetyChecker,
453
+ feature_extractor: CLIPImageProcessor,
454
+ requires_safety_checker: bool = True,
455
+ ):
456
+ super().__init__()
457
+
458
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
459
+ deprecation_message = (
460
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
461
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
462
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
463
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
464
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
465
+ " file"
466
+ )
467
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
468
+ new_config = dict(scheduler.config)
469
+ new_config["steps_offset"] = 1
470
+ scheduler._internal_dict = FrozenDict(new_config)
471
+
472
+ if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
473
+ deprecation_message = (
474
+ f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
475
+ " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
476
+ " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
477
+ " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
478
+ " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
479
+ )
480
+ deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
481
+ new_config = dict(scheduler.config)
482
+ new_config["clip_sample"] = False
483
+ scheduler._internal_dict = FrozenDict(new_config)
484
+
485
+ if safety_checker is None and requires_safety_checker:
486
+ logger.warning(
487
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
488
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
489
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
490
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
491
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
492
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
493
+ )
494
+
495
+ if safety_checker is not None and feature_extractor is None:
496
+ raise ValueError(
497
+ "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
498
+ " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
499
+ )
500
+
501
+ is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
502
+ version.parse(unet.config._diffusers_version).base_version
503
+ ) < version.parse("0.9.0.dev0")
504
+ is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
505
+ if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
506
+ deprecation_message = (
507
+ "The configuration file of the unet has set the default `sample_size` to smaller than"
508
+ " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
509
+ " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
510
+ " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
511
+ " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
512
+ " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
513
+ " in the config might lead to incorrect results in future versions. If you have downloaded this"
514
+ " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
515
+ " the `unet/config.json` file"
516
+ )
517
+ deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
518
+ new_config = dict(unet.config)
519
+ new_config["sample_size"] = 64
520
+ unet._internal_dict = FrozenDict(new_config)
521
+ self.register_modules(
522
+ vae=vae,
523
+ text_encoder=text_encoder,
524
+ tokenizer=tokenizer,
525
+ unet=unet,
526
+ scheduler=scheduler,
527
+ safety_checker=safety_checker,
528
+ feature_extractor=feature_extractor,
529
+ )
530
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
531
+
532
+ self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
533
+ self.register_to_config(
534
+ requires_safety_checker=requires_safety_checker,
535
+ )
536
+
537
+ def enable_vae_slicing(self):
538
+ r"""
539
+ Enable sliced VAE decoding.
540
+
541
+ When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
542
+ steps. This is useful to save some memory and allow larger batch sizes.
543
+ """
544
+ self.vae.enable_slicing()
545
+
546
+ def disable_vae_slicing(self):
547
+ r"""
548
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
549
+ computing decoding in one step.
550
+ """
551
+ self.vae.disable_slicing()
552
+
553
+ def enable_vae_tiling(self):
554
+ r"""
555
+ Enable tiled VAE decoding.
556
+
557
+ When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in
558
+ several steps. This is useful to save a large amount of memory and to allow the processing of larger images.
559
+ """
560
+ self.vae.enable_tiling()
561
+
562
+ def disable_vae_tiling(self):
563
+ r"""
564
+ Disable tiled VAE decoding. If `enable_vae_tiling` was previously invoked, this method will go back to
565
+ computing decoding in one step.
566
+ """
567
+ self.vae.disable_tiling()
568
+
569
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload
570
+ def enable_sequential_cpu_offload(self, gpu_id=0):
571
+ r"""
572
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
573
+ text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
574
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
575
+ Note that offloading happens on a submodule basis. Memory savings are higher than with
576
+ `enable_model_cpu_offload`, but performance is lower.
577
+ """
578
+ if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"):
579
+ from accelerate import cpu_offload
580
+ else:
581
+ raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher")
582
+
583
+ device = torch.device(f"cuda:{gpu_id}")
584
+
585
+ if self.device.type != "cpu":
586
+ self.to("cpu", silence_dtype_warnings=True)
587
+ torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
588
+
589
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]:
590
+ cpu_offload(cpu_offloaded_model, device)
591
+
592
+ if self.safety_checker is not None:
593
+ cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True)
594
+
595
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_model_cpu_offload
596
+ def enable_model_cpu_offload(self, gpu_id=0):
597
+ r"""
598
+ Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
599
+ to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
600
+ method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
601
+ `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
602
+ """
603
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
604
+ from accelerate import cpu_offload_with_hook
605
+ else:
606
+ raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
607
+
608
+ device = torch.device(f"cuda:{gpu_id}")
609
+
610
+ if self.device.type != "cpu":
611
+ self.to("cpu", silence_dtype_warnings=True)
612
+ torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
613
+
614
+ hook = None
615
+ for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
616
+ _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
617
+
618
+ if self.safety_checker is not None:
619
+ _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
620
+
621
+ # We'll offload the last model manually.
622
+ self.final_offload_hook = hook
623
+
624
+ @property
625
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
626
+ def _execution_device(self):
627
+ r"""
628
+ Returns the device on which the pipeline's models will be executed. After calling
629
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
630
+ hooks.
631
+ """
632
+ if not hasattr(self.unet, "_hf_hook"):
633
+ return self.device
634
+ for module in self.unet.modules():
635
+ if (
636
+ hasattr(module, "_hf_hook")
637
+ and hasattr(module._hf_hook, "execution_device")
638
+ and module._hf_hook.execution_device is not None
639
+ ):
640
+ return torch.device(module._hf_hook.execution_device)
641
+ return self.device
642
+
643
+ def _encode_prompt(
644
+ self,
645
+ prompt,
646
+ device,
647
+ num_images_per_prompt,
648
+ do_classifier_free_guidance,
649
+ negative_prompt=None,
650
+ max_embeddings_multiples=3,
651
+ prompt_embeds: Optional[torch.FloatTensor] = None,
652
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
653
+ ):
654
+ r"""
655
+ Encodes the prompt into text encoder hidden states.
656
+
657
+ Args:
658
+ prompt (`str` or `list(int)`):
659
+ prompt to be encoded
660
+ device: (`torch.device`):
661
+ torch device
662
+ num_images_per_prompt (`int`):
663
+ number of images that should be generated per prompt
664
+ do_classifier_free_guidance (`bool`):
665
+ whether to use classifier free guidance or not
666
+ negative_prompt (`str` or `List[str]`):
667
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
668
+ if `guidance_scale` is less than `1`).
669
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
670
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
671
+ """
672
+ if prompt is not None and isinstance(prompt, str):
673
+ batch_size = 1
674
+ elif prompt is not None and isinstance(prompt, list):
675
+ batch_size = len(prompt)
676
+ else:
677
+ batch_size = prompt_embeds.shape[0]
678
+
679
+ if negative_prompt_embeds is None:
680
+ if negative_prompt is None:
681
+ negative_prompt = [""] * batch_size
682
+ elif isinstance(negative_prompt, str):
683
+ negative_prompt = [negative_prompt] * batch_size
684
+ if batch_size != len(negative_prompt):
685
+ raise ValueError(
686
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
687
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
688
+ " the batch size of `prompt`."
689
+ )
690
+ if prompt_embeds is None or negative_prompt_embeds is None:
691
+ if isinstance(self, TextualInversionLoaderMixin):
692
+ prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
693
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
694
+ negative_prompt = self.maybe_convert_prompt(negative_prompt, self.tokenizer)
695
+
696
+ prompt_embeds1, negative_prompt_embeds1 = get_weighted_text_embeddings(
697
+ pipe=self,
698
+ prompt=prompt,
699
+ uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
700
+ max_embeddings_multiples=max_embeddings_multiples,
701
+ )
702
+ if prompt_embeds is None:
703
+ prompt_embeds = prompt_embeds1
704
+ if negative_prompt_embeds is None:
705
+ negative_prompt_embeds = negative_prompt_embeds1
706
+
707
+ bs_embed, seq_len, _ = prompt_embeds.shape
708
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
709
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
710
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
711
+
712
+ if do_classifier_free_guidance:
713
+ bs_embed, seq_len, _ = negative_prompt_embeds.shape
714
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
715
+ negative_prompt_embeds = negative_prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
716
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
717
+
718
+ return prompt_embeds
719
+
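Under classifier-free guidance, `_encode_prompt` returns the negative and positive embeddings stacked into one batch, negative first. A toy shape sketch (all dimensions assumed) of the ordering that the denoising loop later relies on when it splits the UNet output with `.chunk(2)`:

```python
import torch

# Assumed toy dimensions: 2 prompts, 1 image per prompt, CLIP ViT-L hidden size 768.
batch_size, seq_len, dim = 2, 77, 768
negative_prompt_embeds = torch.zeros(batch_size, seq_len, dim)
prompt_embeds = torch.ones(batch_size, seq_len, dim)

# Same ordering as the torch.cat at the end of `_encode_prompt` above:
stacked = torch.cat([negative_prompt_embeds, prompt_embeds])
uncond, cond = stacked.chunk(2)   # mirrors how the noise prediction is split later
assert torch.all(uncond == 0) and torch.all(cond == 1)
```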
720
+ def check_inputs(
721
+ self,
722
+ prompt,
723
+ height,
724
+ width,
725
+ strength,
726
+ callback_steps,
727
+ negative_prompt=None,
728
+ prompt_embeds=None,
729
+ negative_prompt_embeds=None,
730
+ ):
731
+ if height % 8 != 0 or width % 8 != 0:
732
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
733
+
734
+ if strength < 0 or strength > 1:
735
+ raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
736
+
737
+ if (callback_steps is None) or (
738
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
739
+ ):
740
+ raise ValueError(
741
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
742
+ f" {type(callback_steps)}."
743
+ )
744
+
745
+ if prompt is not None and prompt_embeds is not None:
746
+ raise ValueError(
747
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
748
+ " only forward one of the two."
749
+ )
750
+ elif prompt is None and prompt_embeds is None:
751
+ raise ValueError(
752
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
753
+ )
754
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
755
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
756
+
757
+ if negative_prompt is not None and negative_prompt_embeds is not None:
758
+ raise ValueError(
759
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
760
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
761
+ )
762
+
763
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
764
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
765
+ raise ValueError(
766
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
767
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
768
+ f" {negative_prompt_embeds.shape}."
769
+ )
770
+
771
+ def get_timesteps(self, num_inference_steps, strength, device, is_text2img):
772
+ if is_text2img:
773
+ return self.scheduler.timesteps.to(device), num_inference_steps
774
+ else:
775
+ # get the original timestep using init_timestep
776
+ init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
777
+
778
+ t_start = max(num_inference_steps - init_timestep, 0)
779
+ timesteps = self.scheduler.timesteps[t_start * self.scheduler.order :]
780
+
781
+ return timesteps, num_inference_steps - t_start
782
+
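The img2img branch of `get_timesteps` skips the first part of the schedule according to `strength`. A worked example of the arithmetic, assuming a scheduler with `order == 1`:

```python
num_inference_steps = 50
strength = 0.8

init_timestep = min(int(num_inference_steps * strength), num_inference_steps)  # 40
t_start = max(num_inference_steps - init_timestep, 0)                          # 10
# With scheduler.order == 1 the loop runs over timesteps[10:], i.e. 40 steps,
# starting from a partially noised latent instead of pure noise.
print(init_timestep, t_start, num_inference_steps - t_start)  # 40 10 40
```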
783
+ def run_safety_checker(self, image, device, dtype):
784
+ if self.safety_checker is not None:
785
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
786
+ image, has_nsfw_concept = self.safety_checker(
787
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
788
+ )
789
+ else:
790
+ has_nsfw_concept = None
791
+ return image, has_nsfw_concept
792
+
793
+ def decode_latents(self, latents):
794
+ latents = 1 / self.vae.config.scaling_factor * latents
795
+ image = self.vae.decode(latents).sample
796
+ image = (image / 2 + 0.5).clamp(0, 1)
797
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
798
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
799
+ return image
800
+
801
+ def prepare_extra_step_kwargs(self, generator, eta):
802
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
803
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
804
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
805
+ # and should be between [0, 1]
806
+
807
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
808
+ extra_step_kwargs = {}
809
+ if accepts_eta:
810
+ extra_step_kwargs["eta"] = eta
811
+
812
+ # check if the scheduler accepts generator
813
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
814
+ if accepts_generator:
815
+ extra_step_kwargs["generator"] = generator
816
+ return extra_step_kwargs
817
+
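`prepare_extra_step_kwargs` only forwards `eta` (and `generator`) when the scheduler's `step` signature actually accepts them. The same introspection pattern can be checked in isolation; a small sketch using `DDIMScheduler` (which accepts `eta`) and `EulerDiscreteScheduler` purely as assumed examples:

```python
import inspect
from diffusers import DDIMScheduler, EulerDiscreteScheduler

for scheduler_cls in (DDIMScheduler, EulerDiscreteScheduler):
    params = set(inspect.signature(scheduler_cls.step).parameters.keys())
    print(scheduler_cls.__name__, "accepts eta:", "eta" in params)
# DDIMScheduler.step takes `eta`; schedulers that do not simply never receive the kwarg.
```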
818
+ def prepare_latents(
819
+ self,
820
+ image,
821
+ timestep,
822
+ num_images_per_prompt,
823
+ batch_size,
824
+ num_channels_latents,
825
+ height,
826
+ width,
827
+ dtype,
828
+ device,
829
+ generator,
830
+ latents=None,
831
+ ):
832
+ if image is None:
833
+ batch_size = batch_size * num_images_per_prompt
834
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
835
+ if isinstance(generator, list) and len(generator) != batch_size:
836
+ raise ValueError(
837
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
838
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
839
+ )
840
+
841
+ if latents is None:
842
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
843
+ else:
844
+ latents = latents.to(device)
845
+
846
+ # scale the initial noise by the standard deviation required by the scheduler
847
+ latents = latents * self.scheduler.init_noise_sigma
848
+ return latents, None, None
849
+ else:
850
+ image = image.to(device=self.device, dtype=dtype)
851
+ init_latent_dist = self.vae.encode(image).latent_dist
852
+ init_latents = init_latent_dist.sample(generator=generator)
853
+ init_latents = self.vae.config.scaling_factor * init_latents
854
+
855
+ # Expand init_latents for batch_size and num_images_per_prompt
856
+ init_latents = torch.cat([init_latents] * num_images_per_prompt, dim=0)
857
+ init_latents_orig = init_latents
858
+
859
+ # add noise to latents using the timesteps
860
+ noise = randn_tensor(init_latents.shape, generator=generator, device=self.device, dtype=dtype)
861
+ init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
862
+ latents = init_latents
863
+ return latents, init_latents_orig, noise
864
+
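For img2img, `prepare_latents` encodes the input image, scales it by the VAE scaling factor, and then noises it to the chosen start timestep with `scheduler.add_noise`. A standalone sketch of that last step with dummy latents (shapes, values, and the scheduler choice are assumptions):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

init_latents = torch.randn(1, 4, 64, 64)   # stand-in for vae.encode(image) * scaling_factor
noise = torch.randn_like(init_latents)
timestep = torch.tensor([800])             # later timestep -> noisier start (higher strength)

noisy_latents = scheduler.add_noise(init_latents, noise, timestep)
print(noisy_latents.shape)  # torch.Size([1, 4, 64, 64])
```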
865
+ @torch.no_grad()
866
+ def __call__(
867
+ self,
868
+ prompt: Union[str, List[str]],
869
+ negative_prompt: Optional[Union[str, List[str]]] = None,
870
+ image: Union[torch.FloatTensor, PIL.Image.Image] = None,
871
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
872
+ height: int = 512,
873
+ width: int = 512,
874
+ num_inference_steps: int = 50,
875
+ guidance_scale: float = 7.5,
876
+ strength: float = 0.8,
877
+ num_images_per_prompt: Optional[int] = 1,
878
+ add_predicted_noise: Optional[bool] = False,
879
+ eta: float = 0.0,
880
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
881
+ latents: Optional[torch.FloatTensor] = None,
882
+ prompt_embeds: Optional[torch.FloatTensor] = None,
883
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
884
+ max_embeddings_multiples: Optional[int] = 3,
885
+ output_type: Optional[str] = "pil",
886
+ return_dict: bool = True,
887
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
888
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
889
+ callback_steps: int = 1,
890
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
891
+ ):
892
+ r"""
893
+ Function invoked when calling the pipeline for generation.
894
+
895
+ Args:
896
+ prompt (`str` or `List[str]`):
897
+ The prompt or prompts to guide the image generation.
898
+ negative_prompt (`str` or `List[str]`, *optional*):
899
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
900
+ if `guidance_scale` is less than `1`).
901
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
902
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
903
+ process.
904
+ mask_image (`torch.FloatTensor` or `PIL.Image.Image`):
905
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
906
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
907
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
908
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
909
+ height (`int`, *optional*, defaults to 512):
910
+ The height in pixels of the generated image.
911
+ width (`int`, *optional*, defaults to 512):
912
+ The width in pixels of the generated image.
913
+ num_inference_steps (`int`, *optional*, defaults to 50):
914
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
915
+ expense of slower inference.
916
+ guidance_scale (`float`, *optional*, defaults to 7.5):
917
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
918
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
919
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
920
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
921
+ usually at the expense of lower image quality.
922
+ strength (`float`, *optional*, defaults to 0.8):
923
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
924
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
925
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
926
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
927
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
928
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
929
+ The number of images to generate per prompt.
930
+ add_predicted_noise (`bool`, *optional*, defaults to False):
931
+ Use predicted noise instead of random noise when constructing noisy versions of the original image in
932
+ the reverse diffusion process
933
+ eta (`float`, *optional*, defaults to 0.0):
934
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
935
+ [`schedulers.DDIMScheduler`], will be ignored for others.
936
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
937
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
938
+ to make generation deterministic.
939
+ latents (`torch.FloatTensor`, *optional*):
940
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
941
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
942
+ tensor will be generated by sampling using the supplied random `generator`.
943
+ prompt_embeds (`torch.FloatTensor`, *optional*):
944
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
945
+ provided, text embeddings will be generated from `prompt` input argument.
946
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
947
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
948
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
949
+ argument.
950
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
951
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
952
+ output_type (`str`, *optional*, defaults to `"pil"`):
953
+ The output format of the generated image. Choose between
954
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
955
+ return_dict (`bool`, *optional*, defaults to `True`):
956
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
957
+ plain tuple.
958
+ callback (`Callable`, *optional*):
959
+ A function that will be called every `callback_steps` steps during inference. The function will be
960
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
961
+ is_cancelled_callback (`Callable`, *optional*):
962
+ A function that will be called every `callback_steps` steps during inference. If the function returns
963
+ `True`, the inference will be cancelled.
964
+ callback_steps (`int`, *optional*, defaults to 1):
965
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
966
+ called at every step.
967
+ cross_attention_kwargs (`dict`, *optional*):
968
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
969
+ `self.processor` in
970
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
971
+
972
+ Returns:
973
+ `None` if cancelled by `is_cancelled_callback`,
974
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
975
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
976
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
977
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
978
+ (nsfw) content, according to the `safety_checker`.
979
+ """
980
+ # 0. Default height and width to unet
981
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
982
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
983
+
984
+ # 1. Check inputs. Raise error if not correct
985
+ self.check_inputs(
986
+ prompt, height, width, strength, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
987
+ )
988
+
989
+ # 2. Define call parameters
990
+ if prompt is not None and isinstance(prompt, str):
991
+ batch_size = 1
992
+ elif prompt is not None and isinstance(prompt, list):
993
+ batch_size = len(prompt)
994
+ else:
995
+ batch_size = prompt_embeds.shape[0]
996
+
997
+ device = self._execution_device
998
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
999
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
1000
+ # corresponds to doing no classifier free guidance.
1001
+ do_classifier_free_guidance = guidance_scale > 1.0
1002
+
1003
+ # 3. Encode input prompt
1004
+ prompt_embeds = self._encode_prompt(
1005
+ prompt,
1006
+ device,
1007
+ num_images_per_prompt,
1008
+ do_classifier_free_guidance,
1009
+ negative_prompt,
1010
+ max_embeddings_multiples,
1011
+ prompt_embeds=prompt_embeds,
1012
+ negative_prompt_embeds=negative_prompt_embeds,
1013
+ )
1014
+ dtype = prompt_embeds.dtype
1015
+
1016
+ # 4. Preprocess image and mask
1017
+ if isinstance(image, PIL.Image.Image):
1018
+ image = preprocess_image(image, batch_size)
1019
+ if image is not None:
1020
+ image = image.to(device=self.device, dtype=dtype)
1021
+ if isinstance(mask_image, PIL.Image.Image):
1022
+ mask_image = preprocess_mask(mask_image, batch_size, self.vae_scale_factor)
1023
+ if mask_image is not None:
1024
+ mask = mask_image.to(device=self.device, dtype=dtype)
1025
+ mask = torch.cat([mask] * num_images_per_prompt)
1026
+ else:
1027
+ mask = None
1028
+
1029
+ # 5. set timesteps
1030
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
1031
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device, image is None)
1032
+ latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
1033
+
1034
+ # 6. Prepare latent variables
1035
+ latents, init_latents_orig, noise = self.prepare_latents(
1036
+ image,
1037
+ latent_timestep,
1038
+ num_images_per_prompt,
1039
+ batch_size,
1040
+ self.unet.config.in_channels,
1041
+ height,
1042
+ width,
1043
+ dtype,
1044
+ device,
1045
+ generator,
1046
+ latents,
1047
+ )
1048
+
1049
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
1050
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
1051
+
1052
+ # 8. Denoising loop
1053
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
1054
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
1055
+ for i, t in enumerate(timesteps):
1056
+ # expand the latents if we are doing classifier free guidance
1057
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
1058
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
1059
+
1060
+ # predict the noise residual
1061
+ noise_pred = self.unet(
1062
+ latent_model_input,
1063
+ t,
1064
+ encoder_hidden_states=prompt_embeds,
1065
+ cross_attention_kwargs=cross_attention_kwargs,
1066
+ ).sample
1067
+
1068
+ # perform guidance
1069
+ if do_classifier_free_guidance:
1070
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
1071
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
1072
+
1073
+ # compute the previous noisy sample x_t -> x_t-1
1074
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
1075
+
1076
+ if mask is not None:
1077
+ # masking
1078
+ if add_predicted_noise:
1079
+ init_latents_proper = self.scheduler.add_noise(
1080
+ init_latents_orig, noise_pred_uncond, torch.tensor([t])
1081
+ )
1082
+ else:
1083
+ init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, torch.tensor([t]))
1084
+ latents = (init_latents_proper * mask) + (latents * (1 - mask))
1085
+
1086
+ # call the callback, if provided
1087
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
1088
+ progress_bar.update()
1089
+ if i % callback_steps == 0:
1090
+ if callback is not None:
1091
+ callback(i, t, latents)
1092
+ if is_cancelled_callback is not None and is_cancelled_callback():
1093
+ return None
1094
+
1095
+ if output_type == "latent":
1096
+ image = latents
1097
+ has_nsfw_concept = None
1098
+ elif output_type == "pil":
1099
+ # 9. Post-processing
1100
+ image = self.decode_latents(latents)
1101
+
1102
+ # 10. Run safety checker
1103
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
1104
+
1105
+ # 11. Convert to PIL
1106
+ image = self.numpy_to_pil(image)
1107
+ else:
1108
+ # 9. Post-processing
1109
+ image = self.decode_latents(latents)
1110
+
1111
+ # 10. Run safety checker
1112
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
1113
+
1114
+ # Offload last model to CPU
1115
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
1116
+ self.final_offload_hook.offload()
1117
+
1118
+ if not return_dict:
1119
+ return image, has_nsfw_concept
1120
+
1121
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
1122
+
1123
+ def text2img(
1124
+ self,
1125
+ prompt: Union[str, List[str]],
1126
+ negative_prompt: Optional[Union[str, List[str]]] = None,
1127
+ height: int = 512,
1128
+ width: int = 512,
1129
+ num_inference_steps: int = 50,
1130
+ guidance_scale: float = 7.5,
1131
+ num_images_per_prompt: Optional[int] = 1,
1132
+ eta: float = 0.0,
1133
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
1134
+ latents: Optional[torch.FloatTensor] = None,
1135
+ prompt_embeds: Optional[torch.FloatTensor] = None,
1136
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
1137
+ max_embeddings_multiples: Optional[int] = 3,
1138
+ output_type: Optional[str] = "pil",
1139
+ return_dict: bool = True,
1140
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
1141
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
1142
+ callback_steps: int = 1,
1143
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
1144
+ ):
1145
+ r"""
1146
+ Function for text-to-image generation.
1147
+ Args:
1148
+ prompt (`str` or `List[str]`):
1149
+ The prompt or prompts to guide the image generation.
1150
+ negative_prompt (`str` or `List[str]`, *optional*):
1151
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1152
+ if `guidance_scale` is less than `1`).
1153
+ height (`int`, *optional*, defaults to 512):
1154
+ The height in pixels of the generated image.
1155
+ width (`int`, *optional*, defaults to 512):
1156
+ The width in pixels of the generated image.
1157
+ num_inference_steps (`int`, *optional*, defaults to 50):
1158
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
1159
+ expense of slower inference.
1160
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1161
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1162
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1163
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1164
+ 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
1165
+ usually at the expense of lower image quality.
1166
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1167
+ The number of images to generate per prompt.
1168
+ eta (`float`, *optional*, defaults to 0.0):
1169
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1170
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1171
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
1172
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
1173
+ to make generation deterministic.
1174
+ latents (`torch.FloatTensor`, *optional*):
1175
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
1176
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
1177
+ tensor will be generated by sampling using the supplied random `generator`.
1178
+ prompt_embeds (`torch.FloatTensor`, *optional*):
1179
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
1180
+ provided, text embeddings will be generated from `prompt` input argument.
1181
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
1182
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
1183
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
1184
+ argument.
1185
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1186
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1187
+ output_type (`str`, *optional*, defaults to `"pil"`):
1188
+ The output format of the generated image. Choose between
1189
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1190
+ return_dict (`bool`, *optional*, defaults to `True`):
1191
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1192
+ plain tuple.
1193
+ callback (`Callable`, *optional*):
1194
+ A function that will be called every `callback_steps` steps during inference. The function will be
1195
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1196
+ is_cancelled_callback (`Callable`, *optional*):
1197
+ A function that will be called every `callback_steps` steps during inference. If the function returns
1198
+ `True`, the inference will be cancelled.
1199
+ callback_steps (`int`, *optional*, defaults to 1):
1200
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1201
+ called at every step.
1202
+ cross_attention_kwargs (`dict`, *optional*):
1203
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
1204
+ `self.processor` in
1205
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
1206
+
1207
+ Returns:
1208
+ `None` if cancelled by `is_cancelled_callback`,
1209
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1210
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1211
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1212
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1213
+ (nsfw) content, according to the `safety_checker`.
1214
+ """
1215
+ return self.__call__(
1216
+ prompt=prompt,
1217
+ negative_prompt=negative_prompt,
1218
+ height=height,
1219
+ width=width,
1220
+ num_inference_steps=num_inference_steps,
1221
+ guidance_scale=guidance_scale,
1222
+ num_images_per_prompt=num_images_per_prompt,
1223
+ eta=eta,
1224
+ generator=generator,
1225
+ latents=latents,
1226
+ prompt_embeds=prompt_embeds,
1227
+ negative_prompt_embeds=negative_prompt_embeds,
1228
+ max_embeddings_multiples=max_embeddings_multiples,
1229
+ output_type=output_type,
1230
+ return_dict=return_dict,
1231
+ callback=callback,
1232
+ is_cancelled_callback=is_cancelled_callback,
1233
+ callback_steps=callback_steps,
1234
+ cross_attention_kwargs=cross_attention_kwargs,
1235
+ )
1236
+
1237
+ def img2img(
1238
+ self,
1239
+ image: Union[torch.FloatTensor, PIL.Image.Image],
1240
+ prompt: Union[str, List[str]],
1241
+ negative_prompt: Optional[Union[str, List[str]]] = None,
1242
+ strength: float = 0.8,
1243
+ num_inference_steps: Optional[int] = 50,
1244
+ guidance_scale: Optional[float] = 7.5,
1245
+ num_images_per_prompt: Optional[int] = 1,
1246
+ eta: Optional[float] = 0.0,
1247
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
1248
+ prompt_embeds: Optional[torch.FloatTensor] = None,
1249
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
1250
+ max_embeddings_multiples: Optional[int] = 3,
1251
+ output_type: Optional[str] = "pil",
1252
+ return_dict: bool = True,
1253
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
1254
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
1255
+ callback_steps: int = 1,
1256
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
1257
+ ):
1258
+ r"""
1259
+ Function for image-to-image generation.
1260
+ Args:
1261
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
1262
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
1263
+ process.
1264
+ prompt (`str` or `List[str]`):
1265
+ The prompt or prompts to guide the image generation.
1266
+ negative_prompt (`str` or `List[str]`, *optional*):
1267
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1268
+ if `guidance_scale` is less than `1`).
1269
+ strength (`float`, *optional*, defaults to 0.8):
1270
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
1271
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
1272
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
1273
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
1274
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
1275
+ num_inference_steps (`int`, *optional*, defaults to 50):
1276
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
1277
+ expense of slower inference. This parameter will be modulated by `strength`.
1278
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1279
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1280
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1281
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1282
+ 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
1283
+ usually at the expense of lower image quality.
1284
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1285
+ The number of images to generate per prompt.
1286
+ eta (`float`, *optional*, defaults to 0.0):
1287
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1288
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1289
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
1290
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
1291
+ to make generation deterministic.
1292
+ prompt_embeds (`torch.FloatTensor`, *optional*):
1293
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
1294
+ provided, text embeddings will be generated from `prompt` input argument.
1295
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
1296
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
1297
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
1298
+ argument.
1299
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1300
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1301
+ output_type (`str`, *optional*, defaults to `"pil"`):
1302
+ The output format of the generated image. Choose between
1303
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1304
+ return_dict (`bool`, *optional*, defaults to `True`):
1305
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1306
+ plain tuple.
1307
+ callback (`Callable`, *optional*):
1308
+ A function that will be called every `callback_steps` steps during inference. The function will be
1309
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1310
+ is_cancelled_callback (`Callable`, *optional*):
1311
+ A function that will be called every `callback_steps` steps during inference. If the function returns
1312
+ `True`, the inference will be cancelled.
1313
+ callback_steps (`int`, *optional*, defaults to 1):
1314
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1315
+ called at every step.
1316
+ cross_attention_kwargs (`dict`, *optional*):
1317
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
1318
+ `self.processor` in
1319
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
1320
+
1321
+ Returns:
1322
+ `None` if cancelled by `is_cancelled_callback`,
1323
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1324
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1325
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1326
+ (nsfw) content, according to the `safety_checker`.
1327
+ """
1328
+ return self.__call__(
1329
+ prompt=prompt,
1330
+ negative_prompt=negative_prompt,
1331
+ image=image,
1332
+ num_inference_steps=num_inference_steps,
1333
+ guidance_scale=guidance_scale,
1334
+ strength=strength,
1335
+ num_images_per_prompt=num_images_per_prompt,
1336
+ eta=eta,
1337
+ generator=generator,
1338
+ prompt_embeds=prompt_embeds,
1339
+ negative_prompt_embeds=negative_prompt_embeds,
1340
+ max_embeddings_multiples=max_embeddings_multiples,
1341
+ output_type=output_type,
1342
+ return_dict=return_dict,
1343
+ callback=callback,
1344
+ is_cancelled_callback=is_cancelled_callback,
1345
+ callback_steps=callback_steps,
1346
+ cross_attention_kwargs=cross_attention_kwargs,
1347
+ )
1348
+
1349
+ def inpaint(
1350
+ self,
1351
+ image: Union[torch.FloatTensor, PIL.Image.Image],
1352
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image],
1353
+ prompt: Union[str, List[str]],
1354
+ negative_prompt: Optional[Union[str, List[str]]] = None,
1355
+ strength: float = 0.8,
1356
+ num_inference_steps: Optional[int] = 50,
1357
+ guidance_scale: Optional[float] = 7.5,
1358
+ num_images_per_prompt: Optional[int] = 1,
1359
+ add_predicted_noise: Optional[bool] = False,
1360
+ eta: Optional[float] = 0.0,
1361
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
1362
+ prompt_embeds: Optional[torch.FloatTensor] = None,
1363
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
1364
+ max_embeddings_multiples: Optional[int] = 3,
1365
+ output_type: Optional[str] = "pil",
1366
+ return_dict: bool = True,
1367
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
1368
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
1369
+ callback_steps: int = 1,
1370
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
1371
+ ):
1372
+ r"""
1373
+ Function for inpainting.
1374
+ Args:
1375
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
1376
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
1377
+ process. This is the image whose masked region will be inpainted.
1378
+ mask_image (`torch.FloatTensor` or `PIL.Image.Image`):
1379
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
1380
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
1381
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
1382
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
1383
+ prompt (`str` or `List[str]`):
1384
+ The prompt or prompts to guide the image generation.
1385
+ negative_prompt (`str` or `List[str]`, *optional*):
1386
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1387
+ if `guidance_scale` is less than `1`).
1388
+ strength (`float`, *optional*, defaults to 0.8):
1389
+ Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
1390
+ is 1, the denoising process will be run on the masked area for the full number of iterations specified
1391
+ in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more
1392
+ noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
1393
+ num_inference_steps (`int`, *optional*, defaults to 50):
1394
+ The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
1395
+ the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
1396
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1397
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1398
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1399
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1400
+ 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
1401
+ usually at the expense of lower image quality.
1402
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1403
+ The number of images to generate per prompt.
1404
+ add_predicted_noise (`bool`, *optional*, defaults to `False`):
1405
+ Use predicted noise instead of random noise when constructing noisy versions of the original image in
1406
+ the reverse diffusion process.
1407
+ eta (`float`, *optional*, defaults to 0.0):
1408
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1409
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1410
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
1411
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
1412
+ to make generation deterministic.
1413
+ prompt_embeds (`torch.FloatTensor`, *optional*):
1414
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
1415
+ provided, text embeddings will be generated from `prompt` input argument.
1416
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
1417
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
1418
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
1419
+ argument.
1420
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1421
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1422
+ output_type (`str`, *optional*, defaults to `"pil"`):
1423
+ The output format of the generated image. Choose between
1424
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1425
+ return_dict (`bool`, *optional*, defaults to `True`):
1426
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1427
+ plain tuple.
1428
+ callback (`Callable`, *optional*):
1429
+ A function that will be called every `callback_steps` steps during inference. The function will be
1430
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1431
+ is_cancelled_callback (`Callable`, *optional*):
1432
+ A function that will be called every `callback_steps` steps during inference. If the function returns
1433
+ `True`, the inference will be cancelled.
1434
+ callback_steps (`int`, *optional*, defaults to 1):
1435
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1436
+ called at every step.
1437
+ cross_attention_kwargs (`dict`, *optional*):
1438
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
1439
+ `self.processor` in
1440
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
1441
+
1442
+ Returns:
1443
+ `None` if cancelled by `is_cancelled_callback`,
1444
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1445
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1446
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1447
+ (nsfw) content, according to the `safety_checker`.
1448
+ """
1449
+ return self.__call__(
1450
+ prompt=prompt,
1451
+ negative_prompt=negative_prompt,
1452
+ image=image,
1453
+ mask_image=mask_image,
1454
+ num_inference_steps=num_inference_steps,
1455
+ guidance_scale=guidance_scale,
1456
+ strength=strength,
1457
+ num_images_per_prompt=num_images_per_prompt,
1458
+ add_predicted_noise=add_predicted_noise,
1459
+ eta=eta,
1460
+ generator=generator,
1461
+ prompt_embeds=prompt_embeds,
1462
+ negative_prompt_embeds=negative_prompt_embeds,
1463
+ max_embeddings_multiples=max_embeddings_multiples,
1464
+ output_type=output_type,
1465
+ return_dict=return_dict,
1466
+ callback=callback,
1467
+ is_cancelled_callback=is_cancelled_callback,
1468
+ callback_steps=callback_steps,
1469
+ cross_attention_kwargs=cross_attention_kwargs,
1470
+ )
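A minimal usage sketch for the pipeline above (an illustrative addition, not part of the committed file): it assumes the module is loaded as the `lpw_stable_diffusion` community pipeline on top of a Stable Diffusion checkpoint; the checkpoint id, prompts, and mask path are placeholders.

import torch
from diffusers import DiffusionPipeline
from PIL import Image

# load the long-prompt-weighting community pipeline; the checkpoint id is an arbitrary example
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")

# text2img: (word:1.3) raises attention to "word", [word] lowers it
image = pipe.text2img("a (sunlit:1.3) mountain lake, [blurry]", max_embeddings_multiples=3).images[0]

# img2img: reuse the result as the starting point; strength controls how much it is transformed
image = pipe.img2img(image=image, prompt="the same lake at (sunset:1.2)", strength=0.6).images[0]

# inpaint: white pixels in the (placeholder) mask are repainted, black pixels are kept
mask = Image.open("mask.png")
image = pipe.inpaint(image=image, mask_image=mask, prompt="a small wooden boat", strength=0.75).images[0]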
v0.19.2/lpw_stable_diffusion_onnx.py ADDED
@@ -0,0 +1,1146 @@
1
+ import inspect
2
+ import re
3
+ from typing import Callable, List, Optional, Union
4
+
5
+ import numpy as np
6
+ import PIL
7
+ import torch
8
+ from packaging import version
9
+ from transformers import CLIPImageProcessor, CLIPTokenizer
10
+
11
+ import diffusers
12
+ from diffusers import OnnxRuntimeModel, OnnxStableDiffusionPipeline, SchedulerMixin
13
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
14
+ from diffusers.utils import logging
15
+
16
+
17
+ try:
18
+ from diffusers.pipelines.onnx_utils import ORT_TO_NP_TYPE
19
+ except ImportError:
20
+ ORT_TO_NP_TYPE = {
21
+ "tensor(bool)": np.bool_,
22
+ "tensor(int8)": np.int8,
23
+ "tensor(uint8)": np.uint8,
24
+ "tensor(int16)": np.int16,
25
+ "tensor(uint16)": np.uint16,
26
+ "tensor(int32)": np.int32,
27
+ "tensor(uint32)": np.uint32,
28
+ "tensor(int64)": np.int64,
29
+ "tensor(uint64)": np.uint64,
30
+ "tensor(float16)": np.float16,
31
+ "tensor(float)": np.float32,
32
+ "tensor(double)": np.float64,
33
+ }
34
+
35
+ try:
36
+ from diffusers.utils import PIL_INTERPOLATION
37
+ except ImportError:
38
+ if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
39
+ PIL_INTERPOLATION = {
40
+ "linear": PIL.Image.Resampling.BILINEAR,
41
+ "bilinear": PIL.Image.Resampling.BILINEAR,
42
+ "bicubic": PIL.Image.Resampling.BICUBIC,
43
+ "lanczos": PIL.Image.Resampling.LANCZOS,
44
+ "nearest": PIL.Image.Resampling.NEAREST,
45
+ }
46
+ else:
47
+ PIL_INTERPOLATION = {
48
+ "linear": PIL.Image.LINEAR,
49
+ "bilinear": PIL.Image.BILINEAR,
50
+ "bicubic": PIL.Image.BICUBIC,
51
+ "lanczos": PIL.Image.LANCZOS,
52
+ "nearest": PIL.Image.NEAREST,
53
+ }
54
+ # ------------------------------------------------------------------------------
55
+
56
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
57
+
58
+ re_attention = re.compile(
59
+ r"""
60
+ \\\(|
61
+ \\\)|
62
+ \\\[|
63
+ \\]|
64
+ \\\\|
65
+ \\|
66
+ \(|
67
+ \[|
68
+ :([+-]?[.\d]+)\)|
69
+ \)|
70
+ ]|
71
+ [^\\()\[\]:]+|
72
+ :
73
+ """,
74
+ re.X,
75
+ )
76
+
77
+
78
+ def parse_prompt_attention(text):
79
+ """
80
+ Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
81
+ Accepted tokens are:
82
+ (abc) - increases attention to abc by a multiplier of 1.1
83
+ (abc:3.12) - increases attention to abc by a multiplier of 3.12
84
+ [abc] - decreases attention to abc by a multiplier of 1.1
85
+ \( - literal character '('
86
+ \[ - literal character '['
87
+ \) - literal character ')'
88
+ \] - literal character ']'
89
+ \\ - literal character '\'
90
+ anything else - just text
91
+ >>> parse_prompt_attention('normal text')
92
+ [['normal text', 1.0]]
93
+ >>> parse_prompt_attention('an (important) word')
94
+ [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
95
+ >>> parse_prompt_attention('(unbalanced')
96
+ [['unbalanced', 1.1]]
97
+ >>> parse_prompt_attention('\(literal\]')
98
+ [['(literal]', 1.0]]
99
+ >>> parse_prompt_attention('(unnecessary)(parens)')
100
+ [['unnecessaryparens', 1.1]]
101
+ >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
102
+ [['a ', 1.0],
103
+ ['house', 1.5730000000000004],
104
+ [' ', 1.1],
105
+ ['on', 1.0],
106
+ [' a ', 1.1],
107
+ ['hill', 0.55],
108
+ [', sun, ', 1.1],
109
+ ['sky', 1.4641000000000006],
110
+ ['.', 1.1]]
111
+ """
112
+
113
+ res = []
114
+ round_brackets = []
115
+ square_brackets = []
116
+
117
+ round_bracket_multiplier = 1.1
118
+ square_bracket_multiplier = 1 / 1.1
119
+
120
+ def multiply_range(start_position, multiplier):
121
+ for p in range(start_position, len(res)):
122
+ res[p][1] *= multiplier
123
+
124
+ for m in re_attention.finditer(text):
125
+ text = m.group(0)
126
+ weight = m.group(1)
127
+
128
+ if text.startswith("\\"):
129
+ res.append([text[1:], 1.0])
130
+ elif text == "(":
131
+ round_brackets.append(len(res))
132
+ elif text == "[":
133
+ square_brackets.append(len(res))
134
+ elif weight is not None and len(round_brackets) > 0:
135
+ multiply_range(round_brackets.pop(), float(weight))
136
+ elif text == ")" and len(round_brackets) > 0:
137
+ multiply_range(round_brackets.pop(), round_bracket_multiplier)
138
+ elif text == "]" and len(square_brackets) > 0:
139
+ multiply_range(square_brackets.pop(), square_bracket_multiplier)
140
+ else:
141
+ res.append([text, 1.0])
142
+
143
+ for pos in round_brackets:
144
+ multiply_range(pos, round_bracket_multiplier)
145
+
146
+ for pos in square_brackets:
147
+ multiply_range(pos, square_bracket_multiplier)
148
+
149
+ if len(res) == 0:
150
+ res = [["", 1.0]]
151
+
152
+ # merge runs of identical weights
153
+ i = 0
154
+ while i + 1 < len(res):
155
+ if res[i][1] == res[i + 1][1]:
156
+ res[i][0] += res[i + 1][0]
157
+ res.pop(i + 1)
158
+ else:
159
+ i += 1
160
+
161
+ return res
162
+
163
+
164
+ def get_prompts_with_weights(pipe, prompt: List[str], max_length: int):
165
+ r"""
166
+ Tokenize a list of prompts and return its tokens with weights of each token.
167
+
168
+ No padding, starting or ending token is included.
169
+ """
170
+ tokens = []
171
+ weights = []
172
+ truncated = False
173
+ for text in prompt:
174
+ texts_and_weights = parse_prompt_attention(text)
175
+ text_token = []
176
+ text_weight = []
177
+ for word, weight in texts_and_weights:
178
+ # tokenize and discard the starting and the ending token
179
+ token = pipe.tokenizer(word, return_tensors="np").input_ids[0, 1:-1]
180
+ text_token += list(token)
181
+ # copy the weight by length of token
182
+ text_weight += [weight] * len(token)
183
+ # stop if the text is too long (longer than truncation limit)
184
+ if len(text_token) > max_length:
185
+ truncated = True
186
+ break
187
+ # truncate
188
+ if len(text_token) > max_length:
189
+ truncated = True
190
+ text_token = text_token[:max_length]
191
+ text_weight = text_weight[:max_length]
192
+ tokens.append(text_token)
193
+ weights.append(text_weight)
194
+ if truncated:
195
+ logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
196
+ return tokens, weights
197
+
198
+
199
+ def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, pad, no_boseos_middle=True, chunk_length=77):
200
+ r"""
201
+ Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
202
+ """
203
+ max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
204
+ weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
205
+ for i in range(len(tokens)):
206
+ tokens[i] = [bos] + tokens[i] + [pad] * (max_length - 1 - len(tokens[i]) - 1) + [eos]
207
+ if no_boseos_middle:
208
+ weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
209
+ else:
210
+ w = []
211
+ if len(weights[i]) == 0:
212
+ w = [1.0] * weights_length
213
+ else:
214
+ for j in range(max_embeddings_multiples):
215
+ w.append(1.0) # weight for starting token in this chunk
216
+ w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
217
+ w.append(1.0) # weight for ending token in this chunk
218
+ w += [1.0] * (weights_length - len(w))
219
+ weights[i] = w[:]
220
+
221
+ return tokens, weights
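As a quick sanity check on the padding arithmetic above (an illustrative note; 77 is the usual CLIP tokenizer `model_max_length`):

chunk_length = 77                                                # CLIP tokenizer model_max_length
max_embeddings_multiples = 3
max_length = (chunk_length - 2) * max_embeddings_multiples + 2   # 227, as computed by the caller
assert (max_length - 2) // (chunk_length - 2) == max_embeddings_multiples
# each prompt is then padded to exactly max_length ids: [bos] + 225 content/pad ids + [eos]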
222
+
223
+
224
+ def get_unweighted_text_embeddings(
225
+ pipe,
226
+ text_input: np.array,
227
+ chunk_length: int,
228
+ no_boseos_middle: Optional[bool] = True,
229
+ ):
230
+ """
231
+ When the length of tokens exceeds the capacity of the text encoder,
232
+ the input is split into chunks and each chunk is sent to the text encoder individually.
233
+ """
234
+ max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
235
+ if max_embeddings_multiples > 1:
236
+ text_embeddings = []
237
+ for i in range(max_embeddings_multiples):
238
+ # extract the i-th chunk
239
+ text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].copy()
240
+
241
+ # cover the head and the tail by the starting and the ending tokens
242
+ text_input_chunk[:, 0] = text_input[0, 0]
243
+ text_input_chunk[:, -1] = text_input[0, -1]
244
+
245
+ text_embedding = pipe.text_encoder(input_ids=text_input_chunk)[0]
246
+
247
+ if no_boseos_middle:
248
+ if i == 0:
249
+ # discard the ending token
250
+ text_embedding = text_embedding[:, :-1]
251
+ elif i == max_embeddings_multiples - 1:
252
+ # discard the starting token
253
+ text_embedding = text_embedding[:, 1:]
254
+ else:
255
+ # discard both starting and ending tokens
256
+ text_embedding = text_embedding[:, 1:-1]
257
+
258
+ text_embeddings.append(text_embedding)
259
+ text_embeddings = np.concatenate(text_embeddings, axis=1)
260
+ else:
261
+ text_embeddings = pipe.text_encoder(input_ids=text_input)[0]
262
+ return text_embeddings
263
+
264
+
265
+ def get_weighted_text_embeddings(
266
+ pipe,
267
+ prompt: Union[str, List[str]],
268
+ uncond_prompt: Optional[Union[str, List[str]]] = None,
269
+ max_embeddings_multiples: Optional[int] = 4,
270
+ no_boseos_middle: Optional[bool] = False,
271
+ skip_parsing: Optional[bool] = False,
272
+ skip_weighting: Optional[bool] = False,
273
+ **kwargs,
274
+ ):
275
+ r"""
276
+ Prompts can be assigned local weights using brackets. For example,
277
+ prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
278
+ and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
279
+
280
+ Also, to regularize the embedding, the weighted embedding is scaled to preserve the original mean.
281
+
282
+ Args:
283
+ pipe (`OnnxStableDiffusionPipeline`):
284
+ Pipe to provide access to the tokenizer and the text encoder.
285
+ prompt (`str` or `List[str]`):
286
+ The prompt or prompts to guide the image generation.
287
+ uncond_prompt (`str` or `List[str]`):
288
+ The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
289
+ is provided, the embeddings of prompt and uncond_prompt are concatenated.
290
+ max_embeddings_multiples (`int`, *optional*, defaults to `4`):
291
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
292
+ no_boseos_middle (`bool`, *optional*, defaults to `False`):
293
+ If the length of the text tokens exceeds the capacity of the text encoder, whether to keep the starting and
294
+ ending tokens in each of the middle chunks.
295
+ skip_parsing (`bool`, *optional*, defaults to `False`):
296
+ Skip the parsing of brackets.
297
+ skip_weighting (`bool`, *optional*, defaults to `False`):
298
+ Skip the weighting. When parsing is skipped, this is forced to `True`.
299
+ """
300
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
301
+ if isinstance(prompt, str):
302
+ prompt = [prompt]
303
+
304
+ if not skip_parsing:
305
+ prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
306
+ if uncond_prompt is not None:
307
+ if isinstance(uncond_prompt, str):
308
+ uncond_prompt = [uncond_prompt]
309
+ uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
310
+ else:
311
+ prompt_tokens = [
312
+ token[1:-1]
313
+ for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True, return_tensors="np").input_ids
314
+ ]
315
+ prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
316
+ if uncond_prompt is not None:
317
+ if isinstance(uncond_prompt, str):
318
+ uncond_prompt = [uncond_prompt]
319
+ uncond_tokens = [
320
+ token[1:-1]
321
+ for token in pipe.tokenizer(
322
+ uncond_prompt,
323
+ max_length=max_length,
324
+ truncation=True,
325
+ return_tensors="np",
326
+ ).input_ids
327
+ ]
328
+ uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
329
+
330
+ # round up the longest length of tokens to a multiple of (model_max_length - 2)
331
+ max_length = max([len(token) for token in prompt_tokens])
332
+ if uncond_prompt is not None:
333
+ max_length = max(max_length, max([len(token) for token in uncond_tokens]))
334
+
335
+ max_embeddings_multiples = min(
336
+ max_embeddings_multiples,
337
+ (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
338
+ )
339
+ max_embeddings_multiples = max(1, max_embeddings_multiples)
340
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
341
+
342
+ # pad the length of tokens and weights
343
+ bos = pipe.tokenizer.bos_token_id
344
+ eos = pipe.tokenizer.eos_token_id
345
+ pad = getattr(pipe.tokenizer, "pad_token_id", eos)
346
+ prompt_tokens, prompt_weights = pad_tokens_and_weights(
347
+ prompt_tokens,
348
+ prompt_weights,
349
+ max_length,
350
+ bos,
351
+ eos,
352
+ pad,
353
+ no_boseos_middle=no_boseos_middle,
354
+ chunk_length=pipe.tokenizer.model_max_length,
355
+ )
356
+ prompt_tokens = np.array(prompt_tokens, dtype=np.int32)
357
+ if uncond_prompt is not None:
358
+ uncond_tokens, uncond_weights = pad_tokens_and_weights(
359
+ uncond_tokens,
360
+ uncond_weights,
361
+ max_length,
362
+ bos,
363
+ eos,
364
+ pad,
365
+ no_boseos_middle=no_boseos_middle,
366
+ chunk_length=pipe.tokenizer.model_max_length,
367
+ )
368
+ uncond_tokens = np.array(uncond_tokens, dtype=np.int32)
369
+
370
+ # get the embeddings
371
+ text_embeddings = get_unweighted_text_embeddings(
372
+ pipe,
373
+ prompt_tokens,
374
+ pipe.tokenizer.model_max_length,
375
+ no_boseos_middle=no_boseos_middle,
376
+ )
377
+ prompt_weights = np.array(prompt_weights, dtype=text_embeddings.dtype)
378
+ if uncond_prompt is not None:
379
+ uncond_embeddings = get_unweighted_text_embeddings(
380
+ pipe,
381
+ uncond_tokens,
382
+ pipe.tokenizer.model_max_length,
383
+ no_boseos_middle=no_boseos_middle,
384
+ )
385
+ uncond_weights = np.array(uncond_weights, dtype=uncond_embeddings.dtype)
386
+
387
+ # assign weights to the prompts and normalize in the sense of mean
388
+ # TODO: should we normalize by chunk or in a whole (current implementation)?
389
+ if (not skip_parsing) and (not skip_weighting):
390
+ previous_mean = text_embeddings.mean(axis=(-2, -1))
391
+ text_embeddings *= prompt_weights[:, :, None]
392
+ text_embeddings *= (previous_mean / text_embeddings.mean(axis=(-2, -1)))[:, None, None]
393
+ if uncond_prompt is not None:
394
+ previous_mean = uncond_embeddings.mean(axis=(-2, -1))
395
+ uncond_embeddings *= uncond_weights[:, :, None]
396
+ uncond_embeddings *= (previous_mean / uncond_embeddings.mean(axis=(-2, -1)))[:, None, None]
397
+
398
+ # For classifier free guidance, we need to do two forward passes.
399
+ # Here we concatenate the unconditional and text embeddings into a single batch
400
+ # to avoid doing two forward passes
401
+ if uncond_prompt is not None:
402
+ return text_embeddings, uncond_embeddings
403
+
404
+ return text_embeddings
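An illustrative call of the helper above (assuming `pipe` is an already-loaded ONNX Stable Diffusion pipeline exposing `tokenizer` and `text_encoder`; the prompt strings are made up):

# returns numpy arrays of shape (batch, n_tokens, hidden_dim); n_tokens may exceed the 77-token CLIP limit
text_embeddings, uncond_embeddings = get_weighted_text_embeddings(
    pipe,
    prompt="a (sunlit:1.2) forest, highly detailed",
    uncond_prompt="blurry, low quality",
    max_embeddings_multiples=3,
)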
405
+
406
+
407
+ def preprocess_image(image):
408
+ w, h = image.size
409
+ w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
410
+ image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
411
+ image = np.array(image).astype(np.float32) / 255.0
412
+ image = image[None].transpose(0, 3, 1, 2)
413
+ return 2.0 * image - 1.0
414
+
415
+
416
+ def preprocess_mask(mask, scale_factor=8):
417
+ mask = mask.convert("L")
418
+ w, h = mask.size
419
+ w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
420
+ mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL_INTERPOLATION["nearest"])
421
+ mask = np.array(mask).astype(np.float32) / 255.0
422
+ mask = np.tile(mask, (4, 1, 1))
423
+ mask = mask[None].transpose(0, 1, 2, 3) # add a batch dimension (this transpose itself is a no-op)
424
+ mask = 1 - mask # repaint white, keep black
425
+ return mask
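A quick shape check on `preprocess_mask` (an illustrative snippet; the all-white 512x512 mask is a placeholder):

from PIL import Image

mask_img = Image.new("L", (512, 512), color=255)   # all-white placeholder mask
m = preprocess_mask(mask_img, scale_factor=8)
assert m.shape == (1, 4, 64, 64)                   # latent-resolution mask tiled over the 4 latent channels
assert m.max() == 0.0                              # white pixels become 0 after inversion, i.e. repainted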
426
+
427
+
428
+ class OnnxStableDiffusionLongPromptWeightingPipeline(OnnxStableDiffusionPipeline):
429
+ r"""
430
+ Pipeline for text-to-image generation using Stable Diffusion without a token length limit, with support for parsing
431
+ weights in the prompt.
432
+
433
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
434
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
435
+ """
436
+ if version.parse(version.parse(diffusers.__version__).base_version) >= version.parse("0.9.0"):
437
+
438
+ def __init__(
439
+ self,
440
+ vae_encoder: OnnxRuntimeModel,
441
+ vae_decoder: OnnxRuntimeModel,
442
+ text_encoder: OnnxRuntimeModel,
443
+ tokenizer: CLIPTokenizer,
444
+ unet: OnnxRuntimeModel,
445
+ scheduler: SchedulerMixin,
446
+ safety_checker: OnnxRuntimeModel,
447
+ feature_extractor: CLIPImageProcessor,
448
+ requires_safety_checker: bool = True,
449
+ ):
450
+ super().__init__(
451
+ vae_encoder=vae_encoder,
452
+ vae_decoder=vae_decoder,
453
+ text_encoder=text_encoder,
454
+ tokenizer=tokenizer,
455
+ unet=unet,
456
+ scheduler=scheduler,
457
+ safety_checker=safety_checker,
458
+ feature_extractor=feature_extractor,
459
+ requires_safety_checker=requires_safety_checker,
460
+ )
461
+ self.__init__additional__()
462
+
463
+ else:
464
+
465
+ def __init__(
466
+ self,
467
+ vae_encoder: OnnxRuntimeModel,
468
+ vae_decoder: OnnxRuntimeModel,
469
+ text_encoder: OnnxRuntimeModel,
470
+ tokenizer: CLIPTokenizer,
471
+ unet: OnnxRuntimeModel,
472
+ scheduler: SchedulerMixin,
473
+ safety_checker: OnnxRuntimeModel,
474
+ feature_extractor: CLIPImageProcessor,
475
+ ):
476
+ super().__init__(
477
+ vae_encoder=vae_encoder,
478
+ vae_decoder=vae_decoder,
479
+ text_encoder=text_encoder,
480
+ tokenizer=tokenizer,
481
+ unet=unet,
482
+ scheduler=scheduler,
483
+ safety_checker=safety_checker,
484
+ feature_extractor=feature_extractor,
485
+ )
486
+ self.__init__additional__()
487
+
488
+ def __init__additional__(self):
489
+ self.unet.config.in_channels = 4
490
+ self.vae_scale_factor = 8
491
+
492
+ def _encode_prompt(
493
+ self,
494
+ prompt,
495
+ num_images_per_prompt,
496
+ do_classifier_free_guidance,
497
+ negative_prompt,
498
+ max_embeddings_multiples,
499
+ ):
500
+ r"""
501
+ Encodes the prompt into text encoder hidden states.
502
+
503
+ Args:
504
+ prompt (`str` or `list(int)`):
505
+ prompt to be encoded
506
+ num_images_per_prompt (`int`):
507
+ number of images that should be generated per prompt
508
+ do_classifier_free_guidance (`bool`):
509
+ whether to use classifier free guidance or not
510
+ negative_prompt (`str` or `List[str]`):
511
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
512
+ if `guidance_scale` is less than `1`).
513
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
514
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
515
+ """
516
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
517
+
518
+ if negative_prompt is None:
519
+ negative_prompt = [""] * batch_size
520
+ elif isinstance(negative_prompt, str):
521
+ negative_prompt = [negative_prompt] * batch_size
522
+ if batch_size != len(negative_prompt):
523
+ raise ValueError(
524
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
525
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
526
+ " the batch size of `prompt`."
527
+ )
528
+
529
+ text_embeddings, uncond_embeddings = get_weighted_text_embeddings(
530
+ pipe=self,
531
+ prompt=prompt,
532
+ uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
533
+ max_embeddings_multiples=max_embeddings_multiples,
534
+ )
535
+
536
+ text_embeddings = text_embeddings.repeat(num_images_per_prompt, 0)
537
+ if do_classifier_free_guidance:
538
+ uncond_embeddings = uncond_embeddings.repeat(num_images_per_prompt, 0)
539
+ text_embeddings = np.concatenate([uncond_embeddings, text_embeddings])
540
+
541
+ return text_embeddings
542
+
543
+ def check_inputs(self, prompt, height, width, strength, callback_steps):
544
+ if not isinstance(prompt, str) and not isinstance(prompt, list):
545
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
546
+
547
+ if strength < 0 or strength > 1:
548
+ raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}")
549
+
550
+ if height % 8 != 0 or width % 8 != 0:
551
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
552
+
553
+ if (callback_steps is None) or (
554
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
555
+ ):
556
+ raise ValueError(
557
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
558
+ f" {type(callback_steps)}."
559
+ )
560
+
561
+ def get_timesteps(self, num_inference_steps, strength, is_text2img):
562
+ if is_text2img:
563
+ return self.scheduler.timesteps, num_inference_steps
564
+ else:
565
+ # get the original timestep using init_timestep
566
+ offset = self.scheduler.config.get("steps_offset", 0)
567
+ init_timestep = int(num_inference_steps * strength) + offset
568
+ init_timestep = min(init_timestep, num_inference_steps)
569
+
570
+ t_start = max(num_inference_steps - init_timestep + offset, 0)
571
+ timesteps = self.scheduler.timesteps[t_start:]
572
+ return timesteps, num_inference_steps - t_start
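To make the strength/step interaction concrete (an illustrative note; `pipe` stands for an already-loaded instance of this pipeline, and the scheduler's `steps_offset` is assumed to be 0):

# with num_inference_steps=50 and strength=0.6:
#   init_timestep = int(50 * 0.6) + 0 = 30
#   t_start       = max(50 - 30 + 0, 0) = 20
# so only the last 30 of the 50 scheduler timesteps are actually run
pipe.scheduler.set_timesteps(50)
timesteps, remaining_steps = pipe.get_timesteps(num_inference_steps=50, strength=0.6, is_text2img=False)
assert remaining_steps == 30 and len(timesteps) == 30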
573
+
574
+ def run_safety_checker(self, image):
575
+ if self.safety_checker is not None:
576
+ safety_checker_input = self.feature_extractor(
577
+ self.numpy_to_pil(image), return_tensors="np"
578
+ ).pixel_values.astype(image.dtype)
579
+ # calling the safety_checker directly with batch size > 1 raises an error, so run it one image at a time
580
+ images, has_nsfw_concept = [], []
581
+ for i in range(image.shape[0]):
582
+ image_i, has_nsfw_concept_i = self.safety_checker(
583
+ clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
584
+ )
585
+ images.append(image_i)
586
+ has_nsfw_concept.append(has_nsfw_concept_i[0])
587
+ image = np.concatenate(images)
588
+ else:
589
+ has_nsfw_concept = None
590
+ return image, has_nsfw_concept
591
+
592
+ def decode_latents(self, latents):
593
+ latents = 1 / 0.18215 * latents
594
+ # image = self.vae_decoder(latent_sample=latents)[0]
595
+ # the half-precision vae decoder can produce strange results when batch size > 1, so decode one latent at a time
596
+ image = np.concatenate(
597
+ [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])]
598
+ )
599
+ image = np.clip(image / 2 + 0.5, 0, 1)
600
+ image = image.transpose((0, 2, 3, 1))
601
+ return image
602
+
603
+ def prepare_extra_step_kwargs(self, generator, eta):
604
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
605
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
606
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
607
+ # and should be between [0, 1]
608
+
609
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
610
+ extra_step_kwargs = {}
611
+ if accepts_eta:
612
+ extra_step_kwargs["eta"] = eta
613
+
614
+ # check if the scheduler accepts generator
615
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
616
+ if accepts_generator:
617
+ extra_step_kwargs["generator"] = generator
618
+ return extra_step_kwargs
619
+
620
+ def prepare_latents(self, image, timestep, batch_size, height, width, dtype, generator, latents=None):
621
+ if image is None:
622
+ shape = (
623
+ batch_size,
624
+ self.unet.config.in_channels,
625
+ height // self.vae_scale_factor,
626
+ width // self.vae_scale_factor,
627
+ )
628
+
629
+ if latents is None:
630
+ latents = torch.randn(shape, generator=generator, device="cpu").numpy().astype(dtype)
631
+ else:
632
+ if latents.shape != shape:
633
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
634
+
635
+ # scale the initial noise by the standard deviation required by the scheduler
636
+ latents = (torch.from_numpy(latents) * self.scheduler.init_noise_sigma).numpy()
637
+ return latents, None, None
638
+ else:
639
+ init_latents = self.vae_encoder(sample=image)[0]
640
+ init_latents = 0.18215 * init_latents
641
+ init_latents = np.concatenate([init_latents] * batch_size, axis=0)
642
+ init_latents_orig = init_latents
643
+ shape = init_latents.shape
644
+
645
+ # add noise to latents using the timesteps
646
+ noise = torch.randn(shape, generator=generator, device="cpu").numpy().astype(dtype)
647
+ latents = self.scheduler.add_noise(
648
+ torch.from_numpy(init_latents), torch.from_numpy(noise), timestep
649
+ ).numpy()
650
+ return latents, init_latents_orig, noise
651
+
652
+ @torch.no_grad()
653
+ def __call__(
654
+ self,
655
+ prompt: Union[str, List[str]],
656
+ negative_prompt: Optional[Union[str, List[str]]] = None,
657
+ image: Union[np.ndarray, PIL.Image.Image] = None,
658
+ mask_image: Union[np.ndarray, PIL.Image.Image] = None,
659
+ height: int = 512,
660
+ width: int = 512,
661
+ num_inference_steps: int = 50,
662
+ guidance_scale: float = 7.5,
663
+ strength: float = 0.8,
664
+ num_images_per_prompt: Optional[int] = 1,
665
+ eta: float = 0.0,
666
+ generator: Optional[torch.Generator] = None,
667
+ latents: Optional[np.ndarray] = None,
668
+ max_embeddings_multiples: Optional[int] = 3,
669
+ output_type: Optional[str] = "pil",
670
+ return_dict: bool = True,
671
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
672
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
673
+ callback_steps: int = 1,
674
+ **kwargs,
675
+ ):
676
+ r"""
677
+ Function invoked when calling the pipeline for generation.
678
+
679
+ Args:
680
+ prompt (`str` or `List[str]`):
681
+ The prompt or prompts to guide the image generation.
682
+ negative_prompt (`str` or `List[str]`, *optional*):
683
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
684
+ if `guidance_scale` is less than `1`).
685
+ image (`np.ndarray` or `PIL.Image.Image`):
686
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
687
+ process.
688
+ mask_image (`np.ndarray` or `PIL.Image.Image`):
689
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
690
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
691
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
692
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
693
+ height (`int`, *optional*, defaults to 512):
694
+ The height in pixels of the generated image.
695
+ width (`int`, *optional*, defaults to 512):
696
+ The width in pixels of the generated image.
697
+ num_inference_steps (`int`, *optional*, defaults to 50):
698
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
699
+ expense of slower inference.
700
+ guidance_scale (`float`, *optional*, defaults to 7.5):
701
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
702
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
703
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
704
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
705
+ usually at the expense of lower image quality.
706
+ strength (`float`, *optional*, defaults to 0.8):
707
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
708
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
709
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
710
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
711
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
712
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
713
+ The number of images to generate per prompt.
714
+ eta (`float`, *optional*, defaults to 0.0):
715
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
716
+ [`schedulers.DDIMScheduler`], will be ignored for others.
717
+ generator (`torch.Generator`, *optional*):
718
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
719
+ deterministic.
720
+ latents (`np.ndarray`, *optional*):
721
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
722
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
723
+ tensor will be generated by sampling using the supplied random `generator`.
724
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
725
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
726
+ output_type (`str`, *optional*, defaults to `"pil"`):
727
+ The output format of the generated image. Choose between
728
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
729
+ return_dict (`bool`, *optional*, defaults to `True`):
730
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
731
+ plain tuple.
732
+ callback (`Callable`, *optional*):
733
+ A function that will be called every `callback_steps` steps during inference. The function will be
734
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
735
+ is_cancelled_callback (`Callable`, *optional*):
736
+ A function that will be called every `callback_steps` steps during inference. If the function returns
737
+ `True`, the inference will be cancelled.
738
+ callback_steps (`int`, *optional*, defaults to 1):
739
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
740
+ called at every step.
741
+
742
+ Returns:
743
+ `None` if cancelled by `is_cancelled_callback`,
744
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
745
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
746
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
747
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
748
+ (nsfw) content, according to the `safety_checker`.
749
+ """
750
+ # 0. Default height and width to unet
751
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
752
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
753
+
754
+ # 1. Check inputs. Raise error if not correct
755
+ self.check_inputs(prompt, height, width, strength, callback_steps)
756
+
757
+ # 2. Define call parameters
758
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
759
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
760
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
761
+ # corresponds to doing no classifier free guidance.
762
+ do_classifier_free_guidance = guidance_scale > 1.0
763
+
764
+ # 3. Encode input prompt
765
+ text_embeddings = self._encode_prompt(
766
+ prompt,
767
+ num_images_per_prompt,
768
+ do_classifier_free_guidance,
769
+ negative_prompt,
770
+ max_embeddings_multiples,
771
+ )
772
+ dtype = text_embeddings.dtype
773
+
774
+ # 4. Preprocess image and mask
775
+ if isinstance(image, PIL.Image.Image):
776
+ image = preprocess_image(image)
777
+ if image is not None:
778
+ image = image.astype(dtype)
779
+ if isinstance(mask_image, PIL.Image.Image):
780
+ mask_image = preprocess_mask(mask_image, self.vae_scale_factor)
781
+ if mask_image is not None:
782
+ mask = mask_image.astype(dtype)
783
+ mask = np.concatenate([mask] * batch_size * num_images_per_prompt)
784
+ else:
785
+ mask = None
786
+
787
+ # 5. set timesteps
788
+ self.scheduler.set_timesteps(num_inference_steps)
789
+ timestep_dtype = next(
790
+ (input.type for input in self.unet.model.get_inputs() if input.name == "timestep"), "tensor(float)"
791
+ )
792
+ timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
793
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, image is None)
794
+ latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
795
+
796
+ # 6. Prepare latent variables
797
+ latents, init_latents_orig, noise = self.prepare_latents(
798
+ image,
799
+ latent_timestep,
800
+ batch_size * num_images_per_prompt,
801
+ height,
802
+ width,
803
+ dtype,
804
+ generator,
805
+ latents,
806
+ )
807
+
808
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
809
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
810
+
811
+ # 8. Denoising loop
812
+ for i, t in enumerate(self.progress_bar(timesteps)):
813
+ # expand the latents if we are doing classifier free guidance
814
+ latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
815
+ latent_model_input = self.scheduler.scale_model_input(torch.from_numpy(latent_model_input), t)
816
+ latent_model_input = latent_model_input.numpy()
817
+
818
+ # predict the noise residual
819
+ noise_pred = self.unet(
820
+ sample=latent_model_input,
821
+ timestep=np.array([t], dtype=timestep_dtype),
822
+ encoder_hidden_states=text_embeddings,
823
+ )
824
+ noise_pred = noise_pred[0]
825
+
826
+ # perform guidance
827
+ if do_classifier_free_guidance:
828
+ noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
829
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
830
+
831
+ # compute the previous noisy sample x_t -> x_t-1
832
+ scheduler_output = self.scheduler.step(
833
+ torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs
834
+ )
835
+ latents = scheduler_output.prev_sample.numpy()
836
+
837
+ if mask is not None:
838
+ # masking
839
+ init_latents_proper = self.scheduler.add_noise(
840
+ torch.from_numpy(init_latents_orig),
841
+ torch.from_numpy(noise),
842
+ t,
843
+ ).numpy()
844
+ latents = (init_latents_proper * mask) + (latents * (1 - mask))
845
+
846
+ # call the callback, if provided
847
+ if i % callback_steps == 0:
848
+ if callback is not None:
849
+ callback(i, t, latents)
850
+ if is_cancelled_callback is not None and is_cancelled_callback():
851
+ return None
852
+
853
+ # 9. Post-processing
854
+ image = self.decode_latents(latents)
855
+
856
+ # 10. Run safety checker
857
+ image, has_nsfw_concept = self.run_safety_checker(image)
858
+
859
+ # 11. Convert to PIL
860
+ if output_type == "pil":
861
+ image = self.numpy_to_pil(image)
862
+
863
+ if not return_dict:
864
+ return image, has_nsfw_concept
865
+
866
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
867
+
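A minimal usage sketch for the `__call__` entry point above, assuming this file is the long-prompt-weighting ONNX community pipeline loaded via `custom_pipeline="lpw_stable_diffusion_onnx"`; the model id, `revision`, `provider`, and the weighted-prompt syntax are assumptions:

```py
from diffusers import DiffusionPipeline

# Assumed checkpoint and ONNX export; substitute whatever ONNX Stable Diffusion weights you use.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
    custom_pipeline="lpw_stable_diffusion_onnx",
)

result = pipe(
    prompt="a (highly detailed:1.2) photo of a cottage garden",
    negative_prompt="blurry, low quality",
    num_inference_steps=30,
    guidance_scale=7.5,
)
result.images[0].save("garden.png")
```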
868
+ def text2img(
869
+ self,
870
+ prompt: Union[str, List[str]],
871
+ negative_prompt: Optional[Union[str, List[str]]] = None,
872
+ height: int = 512,
873
+ width: int = 512,
874
+ num_inference_steps: int = 50,
875
+ guidance_scale: float = 7.5,
876
+ num_images_per_prompt: Optional[int] = 1,
877
+ eta: float = 0.0,
878
+ generator: Optional[torch.Generator] = None,
879
+ latents: Optional[np.ndarray] = None,
880
+ max_embeddings_multiples: Optional[int] = 3,
881
+ output_type: Optional[str] = "pil",
882
+ return_dict: bool = True,
883
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
884
+ callback_steps: int = 1,
885
+ **kwargs,
886
+ ):
887
+ r"""
888
+ Function for text-to-image generation.
889
+ Args:
890
+ prompt (`str` or `List[str]`):
891
+ The prompt or prompts to guide the image generation.
892
+ negative_prompt (`str` or `List[str]`, *optional*):
893
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
894
+ if `guidance_scale` is less than `1`).
895
+ height (`int`, *optional*, defaults to 512):
896
+ The height in pixels of the generated image.
897
+ width (`int`, *optional*, defaults to 512):
898
+ The width in pixels of the generated image.
899
+ num_inference_steps (`int`, *optional*, defaults to 50):
900
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
901
+ expense of slower inference.
902
+ guidance_scale (`float`, *optional*, defaults to 7.5):
903
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
904
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
905
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
906
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
907
+ usually at the expense of lower image quality.
908
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
909
+ The number of images to generate per prompt.
910
+ eta (`float`, *optional*, defaults to 0.0):
911
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
912
+ [`schedulers.DDIMScheduler`], will be ignored for others.
913
+ generator (`torch.Generator`, *optional*):
914
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
915
+ deterministic.
916
+ latents (`np.ndarray`, *optional*):
917
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
918
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
919
+ tensor will be generated by sampling using the supplied random `generator`.
920
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
921
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
922
+ output_type (`str`, *optional*, defaults to `"pil"`):
923
+ The output format of the generated image. Choose between
924
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
925
+ return_dict (`bool`, *optional*, defaults to `True`):
926
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
927
+ plain tuple.
928
+ callback (`Callable`, *optional*):
929
+ A function that will be called every `callback_steps` steps during inference. The function will be
930
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
931
+ callback_steps (`int`, *optional*, defaults to 1):
932
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
933
+ called at every step.
934
+ Returns:
935
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
936
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
937
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
938
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
939
+ (nsfw) content, according to the `safety_checker`.
940
+ """
941
+ return self.__call__(
942
+ prompt=prompt,
943
+ negative_prompt=negative_prompt,
944
+ height=height,
945
+ width=width,
946
+ num_inference_steps=num_inference_steps,
947
+ guidance_scale=guidance_scale,
948
+ num_images_per_prompt=num_images_per_prompt,
949
+ eta=eta,
950
+ generator=generator,
951
+ latents=latents,
952
+ max_embeddings_multiples=max_embeddings_multiples,
953
+ output_type=output_type,
954
+ return_dict=return_dict,
955
+ callback=callback,
956
+ callback_steps=callback_steps,
957
+ **kwargs,
958
+ )
959
+
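`text2img` simply forwards to `__call__` without an init image; a one-line sketch, assuming `pipe` was loaded as in the earlier snippet:

```py
image = pipe.text2img("a watercolor painting of a lighthouse", width=512, height=512).images[0]
```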
960
+ def img2img(
961
+ self,
962
+ image: Union[np.ndarray, PIL.Image.Image],
963
+ prompt: Union[str, List[str]],
964
+ negative_prompt: Optional[Union[str, List[str]]] = None,
965
+ strength: float = 0.8,
966
+ num_inference_steps: Optional[int] = 50,
967
+ guidance_scale: Optional[float] = 7.5,
968
+ num_images_per_prompt: Optional[int] = 1,
969
+ eta: Optional[float] = 0.0,
970
+ generator: Optional[torch.Generator] = None,
971
+ max_embeddings_multiples: Optional[int] = 3,
972
+ output_type: Optional[str] = "pil",
973
+ return_dict: bool = True,
974
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
975
+ callback_steps: int = 1,
976
+ **kwargs,
977
+ ):
978
+ r"""
979
+ Function for image-to-image generation.
980
+ Args:
981
+ image (`np.ndarray` or `PIL.Image.Image`):
982
+ `Image`, or ndarray representing an image batch, that will be used as the starting point for the
983
+ process.
984
+ prompt (`str` or `List[str]`):
985
+ The prompt or prompts to guide the image generation.
986
+ negative_prompt (`str` or `List[str]`, *optional*):
987
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
988
+ if `guidance_scale` is less than `1`).
989
+ strength (`float`, *optional*, defaults to 0.8):
990
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
991
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
992
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
993
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
994
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
995
+ num_inference_steps (`int`, *optional*, defaults to 50):
996
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
997
+ expense of slower inference. This parameter will be modulated by `strength`.
998
+ guidance_scale (`float`, *optional*, defaults to 7.5):
999
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1000
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1001
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1002
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
1003
+ usually at the expense of lower image quality.
1004
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1005
+ The number of images to generate per prompt.
1006
+ eta (`float`, *optional*, defaults to 0.0):
1007
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1008
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1009
+ generator (`torch.Generator`, *optional*):
1010
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1011
+ deterministic.
1012
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1013
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1014
+ output_type (`str`, *optional*, defaults to `"pil"`):
1015
+ The output format of the generated image. Choose between
1016
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1017
+ return_dict (`bool`, *optional*, defaults to `True`):
1018
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1019
+ plain tuple.
1020
+ callback (`Callable`, *optional*):
1021
+ A function that will be called every `callback_steps` steps during inference. The function will be
1022
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
1023
+ callback_steps (`int`, *optional*, defaults to 1):
1024
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1025
+ called at every step.
1026
+ Returns:
1027
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1028
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1029
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1030
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1031
+ (nsfw) content, according to the `safety_checker`.
1032
+ """
1033
+ return self.__call__(
1034
+ prompt=prompt,
1035
+ negative_prompt=negative_prompt,
1036
+ image=image,
1037
+ num_inference_steps=num_inference_steps,
1038
+ guidance_scale=guidance_scale,
1039
+ strength=strength,
1040
+ num_images_per_prompt=num_images_per_prompt,
1041
+ eta=eta,
1042
+ generator=generator,
1043
+ max_embeddings_multiples=max_embeddings_multiples,
1044
+ output_type=output_type,
1045
+ return_dict=return_dict,
1046
+ callback=callback,
1047
+ callback_steps=callback_steps,
1048
+ **kwargs,
1049
+ )
1050
+
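A sketch of the `img2img` entry point, again assuming `pipe` from the earlier snippet and a local input image (the file name is a placeholder):

```py
import PIL.Image

init_image = PIL.Image.open("sketch.png").convert("RGB").resize((512, 512))
image = pipe.img2img(
    image=init_image,
    prompt="a detailed oil painting of the same scene",
    strength=0.7,
    num_inference_steps=50,
).images[0]
```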
1051
+ def inpaint(
1052
+ self,
1053
+ image: Union[np.ndarray, PIL.Image.Image],
1054
+ mask_image: Union[np.ndarray, PIL.Image.Image],
1055
+ prompt: Union[str, List[str]],
1056
+ negative_prompt: Optional[Union[str, List[str]]] = None,
1057
+ strength: float = 0.8,
1058
+ num_inference_steps: Optional[int] = 50,
1059
+ guidance_scale: Optional[float] = 7.5,
1060
+ num_images_per_prompt: Optional[int] = 1,
1061
+ eta: Optional[float] = 0.0,
1062
+ generator: Optional[torch.Generator] = None,
1063
+ max_embeddings_multiples: Optional[int] = 3,
1064
+ output_type: Optional[str] = "pil",
1065
+ return_dict: bool = True,
1066
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
1067
+ callback_steps: int = 1,
1068
+ **kwargs,
1069
+ ):
1070
+ r"""
1071
+ Function for inpainting.
1072
+ Args:
1073
+ image (`np.ndarray` or `PIL.Image.Image`):
1074
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
1075
+ process. This is the image whose masked region will be inpainted.
1076
+ mask_image (`np.ndarray` or `PIL.Image.Image`):
1077
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
1078
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
1079
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
1080
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
1081
+ prompt (`str` or `List[str]`):
1082
+ The prompt or prompts to guide the image generation.
1083
+ negative_prompt (`str` or `List[str]`, *optional*):
1084
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1085
+ if `guidance_scale` is less than `1`).
1086
+ strength (`float`, *optional*, defaults to 0.8):
1087
+ Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
1088
+ is 1, the denoising process will be run on the masked area for the full number of iterations specified
1089
+ in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more
1090
+ noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
1091
+ num_inference_steps (`int`, *optional*, defaults to 50):
1092
+ The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
1093
+ the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
1094
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1095
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1096
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1097
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1098
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
1099
+ usually at the expense of lower image quality.
1100
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1101
+ The number of images to generate per prompt.
1102
+ eta (`float`, *optional*, defaults to 0.0):
1103
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1104
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1105
+ generator (`torch.Generator`, *optional*):
1106
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1107
+ deterministic.
1108
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1109
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1110
+ output_type (`str`, *optional*, defaults to `"pil"`):
1111
+ The output format of the generate image. Choose between
1112
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1113
+ return_dict (`bool`, *optional*, defaults to `True`):
1114
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1115
+ plain tuple.
1116
+ callback (`Callable`, *optional*):
1117
+ A function that will be called every `callback_steps` steps during inference. The function will be
1118
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
1119
+ callback_steps (`int`, *optional*, defaults to 1):
1120
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1121
+ called at every step.
1122
+ Returns:
1123
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1124
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1125
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1126
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1127
+ (nsfw) content, according to the `safety_checker`.
1128
+ """
1129
+ return self.__call__(
1130
+ prompt=prompt,
1131
+ negative_prompt=negative_prompt,
1132
+ image=image,
1133
+ mask_image=mask_image,
1134
+ num_inference_steps=num_inference_steps,
1135
+ guidance_scale=guidance_scale,
1136
+ strength=strength,
1137
+ num_images_per_prompt=num_images_per_prompt,
1138
+ eta=eta,
1139
+ generator=generator,
1140
+ max_embeddings_multiples=max_embeddings_multiples,
1141
+ output_type=output_type,
1142
+ return_dict=return_dict,
1143
+ callback=callback,
1144
+ callback_steps=callback_steps,
1145
+ **kwargs,
1146
+ )
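And the `inpaint` entry point, assuming the same `pipe`; white areas of the mask are repainted, black areas are preserved (file names are placeholders):

```py
import PIL.Image

init_image = PIL.Image.open("photo.png").convert("RGB").resize((512, 512))
mask_image = PIL.Image.open("mask.png").convert("L").resize((512, 512))
image = pipe.inpaint(
    image=init_image,
    mask_image=mask_image,
    prompt="a vase of flowers on the table",
    strength=0.75,
).images[0]
```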
v0.19.2/magic_mix.py ADDED
@@ -0,0 +1,152 @@
1
+ from typing import Union
2
+
3
+ import torch
4
+ from PIL import Image
5
+ from torchvision import transforms as tfms
6
+ from tqdm.auto import tqdm
7
+ from transformers import CLIPTextModel, CLIPTokenizer
8
+
9
+ from diffusers import (
10
+ AutoencoderKL,
11
+ DDIMScheduler,
12
+ DiffusionPipeline,
13
+ LMSDiscreteScheduler,
14
+ PNDMScheduler,
15
+ UNet2DConditionModel,
16
+ )
17
+
18
+
19
+ class MagicMixPipeline(DiffusionPipeline):
20
+ def __init__(
21
+ self,
22
+ vae: AutoencoderKL,
23
+ text_encoder: CLIPTextModel,
24
+ tokenizer: CLIPTokenizer,
25
+ unet: UNet2DConditionModel,
26
+ scheduler: Union[PNDMScheduler, LMSDiscreteScheduler, DDIMScheduler],
27
+ ):
28
+ super().__init__()
29
+
30
+ self.register_modules(vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler)
31
+
32
+ # convert PIL image to latents
33
+ def encode(self, img):
34
+ with torch.no_grad():
35
+ latent = self.vae.encode(tfms.ToTensor()(img).unsqueeze(0).to(self.device) * 2 - 1)
36
+ latent = 0.18215 * latent.latent_dist.sample()
37
+ return latent
38
+
39
+ # convert latents to PIL image
40
+ def decode(self, latent):
41
+ latent = (1 / 0.18215) * latent
42
+ with torch.no_grad():
43
+ img = self.vae.decode(latent).sample
44
+ img = (img / 2 + 0.5).clamp(0, 1)
45
+ img = img.detach().cpu().permute(0, 2, 3, 1).numpy()
46
+ img = (img * 255).round().astype("uint8")
47
+ return Image.fromarray(img[0])
48
+
49
+ # convert prompt into text embeddings, along with unconditional embeddings for classifier-free guidance
50
+ def prep_text(self, prompt):
51
+ text_input = self.tokenizer(
52
+ prompt,
53
+ padding="max_length",
54
+ max_length=self.tokenizer.model_max_length,
55
+ truncation=True,
56
+ return_tensors="pt",
57
+ )
58
+
59
+ text_embedding = self.text_encoder(text_input.input_ids.to(self.device))[0]
60
+
61
+ uncond_input = self.tokenizer(
62
+ "",
63
+ padding="max_length",
64
+ max_length=self.tokenizer.model_max_length,
65
+ truncation=True,
66
+ return_tensors="pt",
67
+ )
68
+
69
+ uncond_embedding = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
70
+
71
+ return torch.cat([uncond_embedding, text_embedding])
72
+
73
+ def __call__(
74
+ self,
75
+ img: Image.Image,
76
+ prompt: str,
77
+ kmin: float = 0.3,
78
+ kmax: float = 0.6,
79
+ mix_factor: float = 0.5,
80
+ seed: int = 42,
81
+ steps: int = 50,
82
+ guidance_scale: float = 7.5,
83
+ ) -> Image.Image:
84
+ tmin = steps - int(kmin * steps)
85
+ tmax = steps - int(kmax * steps)
86
+
87
+ text_embeddings = self.prep_text(prompt)
88
+
89
+ self.scheduler.set_timesteps(steps)
90
+
91
+ width, height = img.size
92
+ encoded = self.encode(img)
93
+
94
+ torch.manual_seed(seed)
95
+ noise = torch.randn(
96
+ (1, self.unet.config.in_channels, height // 8, width // 8),
97
+ ).to(self.device)
98
+
99
+ latents = self.scheduler.add_noise(
100
+ encoded,
101
+ noise,
102
+ timesteps=self.scheduler.timesteps[tmax],
103
+ )
104
+
105
+ input = torch.cat([latents] * 2)
106
+
107
+ input = self.scheduler.scale_model_input(input, self.scheduler.timesteps[tmax])
108
+
109
+ with torch.no_grad():
110
+ pred = self.unet(
111
+ input,
112
+ self.scheduler.timesteps[tmax],
113
+ encoder_hidden_states=text_embeddings,
114
+ ).sample
115
+
116
+ pred_uncond, pred_text = pred.chunk(2)
117
+ pred = pred_uncond + guidance_scale * (pred_text - pred_uncond)
118
+
119
+ latents = self.scheduler.step(pred, self.scheduler.timesteps[tmax], latents).prev_sample
120
+
121
+ for i, t in enumerate(tqdm(self.scheduler.timesteps)):
122
+ if i > tmax:
123
+ if i < tmin: # layout generation phase
124
+ orig_latents = self.scheduler.add_noise(
125
+ encoded,
126
+ noise,
127
+ timesteps=t,
128
+ )
129
+
130
+ input = (mix_factor * latents) + (
131
+ 1 - mix_factor
132
+ ) * orig_latents # interpolating between layout noise and conditionally generated noise to preserve layout semantics
133
+ input = torch.cat([input] * 2)
134
+
135
+ else: # content generation phase
136
+ input = torch.cat([latents] * 2)
137
+
138
+ input = self.scheduler.scale_model_input(input, t)
139
+
140
+ with torch.no_grad():
141
+ pred = self.unet(
142
+ input,
143
+ t,
144
+ encoder_hidden_states=text_embeddings,
145
+ ).sample
146
+
147
+ pred_uncond, pred_text = pred.chunk(2)
148
+ pred = pred_uncond + guidance_scale * (pred_text - pred_uncond)
149
+
150
+ latents = self.scheduler.step(pred, t, latents).prev_sample
151
+
152
+ return self.decode(latents)
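A minimal usage sketch for `MagicMixPipeline`, assuming it is loaded through `custom_pipeline="magic_mix"` (the checkpoint and image path are assumptions). With `steps=50`, `kmin=0.3` and `kmax=0.6` give `tmin=35` and `tmax=20`, so steps 21-34 form the layout-generation phase and the rest the content phase, while `mix_factor` controls how strongly the layout noise is blended in:

```py
from diffusers import DDIMScheduler, DiffusionPipeline
from PIL import Image

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="magic_mix",
    scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
).to("cuda")

layout_image = Image.open("phone.jpg")  # provides the layout/shape semantics
mixed = pipe(layout_image, prompt="bed", kmin=0.3, kmax=0.6, mix_factor=0.5)
mixed.save("phone_bed_mix.jpg")
```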
v0.19.2/mixture_canvas.py ADDED
@@ -0,0 +1,503 @@
1
+ import re
2
+ from copy import deepcopy
3
+ from dataclasses import asdict, dataclass
4
+ from enum import Enum
5
+ from typing import List, Optional, Union
6
+
7
+ import numpy as np
8
+ import torch
9
+ from numpy import exp, pi, sqrt
10
+ from torchvision.transforms.functional import resize
11
+ from tqdm.auto import tqdm
12
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
13
+
14
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
15
+ from diffusers.pipeline_utils import DiffusionPipeline
16
+ from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
17
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
18
+
19
+
20
+ def preprocess_image(image):
21
+ from PIL import Image
22
+
23
+ """Preprocess an input image
24
+
25
+ Same as
26
+ https://github.com/huggingface/diffusers/blob/1138d63b519e37f0ce04e027b9f4a3261d27c628/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L44
27
+ """
28
+ w, h = image.size
29
+ w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
30
+ image = image.resize((w, h), resample=Image.LANCZOS)
31
+ image = np.array(image).astype(np.float32) / 255.0
32
+ image = image[None].transpose(0, 3, 1, 2)
33
+ image = torch.from_numpy(image)
34
+ return 2.0 * image - 1.0
35
+
36
+
37
+ @dataclass
38
+ class CanvasRegion:
39
+ """Class defining a rectangular region in the canvas"""
40
+
41
+ row_init: int # Region starting row in pixel space (included)
42
+ row_end: int # Region end row in pixel space (not included)
43
+ col_init: int # Region starting column in pixel space (included)
44
+ col_end: int # Region end column in pixel space (not included)
45
+ region_seed: int = None # Seed for random operations in this region
46
+ noise_eps: float = 0.0 # Deviation of a zero-mean gaussian noise to be applied over the latents in this region. Useful for slightly "rerolling" latents
47
+
48
+ def __post_init__(self):
49
+ # Initialize arguments if not specified
50
+ if self.region_seed is None:
51
+ self.region_seed = np.random.randint(9999999999)
52
+ # Check coordinates are non-negative
53
+ for coord in [self.row_init, self.row_end, self.col_init, self.col_end]:
54
+ if coord < 0:
55
+ raise ValueError(
56
+ f"A CanvasRegion must be defined with non-negative indices, found ({self.row_init}, {self.row_end}, {self.col_init}, {self.col_end})"
57
+ )
58
+ # Check coordinates are divisible by 8, else we end up with nasty rounding error when mapping to latent space
59
+ for coord in [self.row_init, self.row_end, self.col_init, self.col_end]:
60
+ if coord // 8 != coord / 8:
61
+ raise ValueError(
62
+ f"A CanvasRegion must be defined with locations divisible by 8, found ({self.row_init}-{self.row_end}, {self.col_init}-{self.col_end})"
63
+ )
64
+ # Check noise eps is non-negative
65
+ if self.noise_eps < 0:
66
+ raise ValueError(f"A CanvasRegion must be defined with a non-negative noise_eps, found {self.noise_eps}")
67
+ # Compute coordinates for this region in latent space
68
+ self.latent_row_init = self.row_init // 8
69
+ self.latent_row_end = self.row_end // 8
70
+ self.latent_col_init = self.col_init // 8
71
+ self.latent_col_end = self.col_end // 8
72
+
73
+ @property
74
+ def width(self):
75
+ return self.col_end - self.col_init
76
+
77
+ @property
78
+ def height(self):
79
+ return self.row_end - self.row_init
80
+
81
+ def get_region_generator(self, device="cpu"):
82
+ """Creates a torch.Generator based on the random seed of this region"""
83
+ # Initialize region generator
84
+ return torch.Generator(device).manual_seed(self.region_seed)
85
+
86
+ @property
87
+ def __dict__(self):
88
+ return asdict(self)
89
+
90
+
91
+ class MaskModes(Enum):
92
+ """Modes in which the influence of diffuser is masked"""
93
+
94
+ CONSTANT = "constant"
95
+ GAUSSIAN = "gaussian"
96
+ QUARTIC = "quartic" # See https://en.wikipedia.org/wiki/Kernel_(statistics)
97
+
98
+
99
+ @dataclass
100
+ class DiffusionRegion(CanvasRegion):
101
+ """Abstract class defining a region where some class of diffusion process is acting"""
102
+
103
+ pass
104
+
105
+
106
+ @dataclass
107
+ class Text2ImageRegion(DiffusionRegion):
108
+ """Class defining a region where a text guided diffusion process is acting"""
109
+
110
+ prompt: str = "" # Text prompt guiding the diffuser in this region
111
+ guidance_scale: float = 7.5 # Guidance scale of the diffuser in this region. If None, randomize
112
+ mask_type: MaskModes = MaskModes.GAUSSIAN.value # Kind of weight mask applied to this region
113
+ mask_weight: float = 1.0 # Global weights multiplier of the mask
114
+ tokenized_prompt = None # Tokenized prompt
115
+ encoded_prompt = None # Encoded prompt
116
+
117
+ def __post_init__(self):
118
+ super().__post_init__()
119
+ # Mask weight cannot be negative
120
+ if self.mask_weight < 0:
121
+ raise ValueError(
122
+ f"A Text2ImageRegion must be defined with non-negative mask weight, found {self.mask_weight}"
123
+ )
124
+ # Mask type must be an actual known mask
125
+ if self.mask_type not in [e.value for e in MaskModes]:
126
+ raise ValueError(
127
+ f"A Text2ImageRegion was defined with mask {self.mask_type}, which is not an accepted mask ({[e.value for e in MaskModes]})"
128
+ )
129
+ # Randomize arguments if given as None
130
+ if self.guidance_scale is None:
131
+ self.guidance_scale = np.random.randint(5, 30)
132
+ # Clean prompt
133
+ self.prompt = re.sub(" +", " ", self.prompt).replace("\n", " ")
134
+
135
+ def tokenize_prompt(self, tokenizer):
136
+ """Tokenizes the prompt for this diffusion region using a given tokenizer"""
137
+ self.tokenized_prompt = tokenizer(
138
+ self.prompt,
139
+ padding="max_length",
140
+ max_length=tokenizer.model_max_length,
141
+ truncation=True,
142
+ return_tensors="pt",
143
+ )
144
+
145
+ def encode_prompt(self, text_encoder, device):
146
+ """Encodes the previously tokenized prompt for this diffusion region using a given encoder"""
147
+ assert self.tokenized_prompt is not None, ValueError(
148
+ "Prompt in diffusion region must be tokenized before encoding"
149
+ )
150
+ self.encoded_prompt = text_encoder(self.tokenized_prompt.input_ids.to(device))[0]
151
+
152
+
153
+ @dataclass
154
+ class Image2ImageRegion(DiffusionRegion):
155
+ """Class defining a region where an image guided diffusion process is acting"""
156
+
157
+ reference_image: torch.FloatTensor = None
158
+ strength: float = 0.8 # Strength of the image
159
+
160
+ def __post_init__(self):
161
+ super().__post_init__()
162
+ if self.reference_image is None:
163
+ raise ValueError("Must provide a reference image when creating an Image2ImageRegion")
164
+ if self.strength < 0 or self.strength > 1:
165
+ raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {self.strength}")
166
+ # Rescale image to region shape
167
+ self.reference_image = resize(self.reference_image, size=[self.height, self.width])
168
+
169
+ def encode_reference_image(self, encoder, device, generator, cpu_vae=False):
170
+ """Encodes the reference image for this Image2Image region into the latent space"""
171
+ # Place encoder in CPU or not following the parameter cpu_vae
172
+ if cpu_vae:
173
+ # Note here we use mean instead of sample, to avoid moving also generator to CPU, which is troublesome
174
+ self.reference_latents = encoder.cpu().encode(self.reference_image).latent_dist.mean.to(device)
175
+ else:
176
+ self.reference_latents = encoder.encode(self.reference_image.to(device)).latent_dist.sample(
177
+ generator=generator
178
+ )
179
+ self.reference_latents = 0.18215 * self.reference_latents
180
+
181
+ @property
182
+ def __dict__(self):
183
+ # This class requires special casting to dict because of the reference_image tensor. Otherwise it cannot be casted to JSON
184
+
185
+ # Get all basic fields from parent class
186
+ super_fields = {key: getattr(self, key) for key in DiffusionRegion.__dataclass_fields__.keys()}
187
+ # Pack other fields
188
+ return {**super_fields, "reference_image": self.reference_image.cpu().tolist(), "strength": self.strength}
189
+
190
+
191
+ class RerollModes(Enum):
192
+ """Modes in which the reroll regions operate"""
193
+
194
+ RESET = "reset" # Completely reset the random noise in the region
195
+ EPSILON = "epsilon" # Alter slightly the latents in the region
196
+
197
+
198
+ @dataclass
199
+ class RerollRegion(CanvasRegion):
200
+ """Class defining a rectangular canvas region in which initial latent noise will be rerolled"""
201
+
202
+ reroll_mode: RerollModes = RerollModes.RESET.value
203
+
204
+
205
+ @dataclass
206
+ class MaskWeightsBuilder:
207
+ """Auxiliary class to compute a tensor of weights for a given diffusion region"""
208
+
209
+ latent_space_dim: int # Size of the U-net latent space
210
+ nbatch: int = 1 # Batch size in the U-net
211
+
212
+ def compute_mask_weights(self, region: DiffusionRegion) -> torch.tensor:
213
+ """Computes a tensor of weights for a given diffusion region"""
214
+ MASK_BUILDERS = {
215
+ MaskModes.CONSTANT.value: self._constant_weights,
216
+ MaskModes.GAUSSIAN.value: self._gaussian_weights,
217
+ MaskModes.QUARTIC.value: self._quartic_weights,
218
+ }
219
+ return MASK_BUILDERS[region.mask_type](region)
220
+
221
+ def _constant_weights(self, region: DiffusionRegion) -> torch.tensor:
222
+ """Computes a tensor of constant weights for a given diffusion region"""
223
+ latent_width = region.latent_col_end - region.latent_col_init
224
+ latent_height = region.latent_row_end - region.latent_row_init
225
+ return torch.ones(self.nbatch, self.latent_space_dim, latent_height, latent_width) * region.mask_weight
226
+
227
+ def _gaussian_weights(self, region: DiffusionRegion) -> torch.tensor:
228
+ """Generates a gaussian mask of weights for tile contributions"""
229
+ latent_width = region.latent_col_end - region.latent_col_init
230
+ latent_height = region.latent_row_end - region.latent_row_init
231
+
232
+ var = 0.01
233
+ midpoint = (latent_width - 1) / 2 # -1 because index goes from 0 to latent_width - 1
234
+ x_probs = [
235
+ exp(-(x - midpoint) * (x - midpoint) / (latent_width * latent_width) / (2 * var)) / sqrt(2 * pi * var)
236
+ for x in range(latent_width)
237
+ ]
238
+ midpoint = (latent_height - 1) / 2
239
+ y_probs = [
240
+ exp(-(y - midpoint) * (y - midpoint) / (latent_height * latent_height) / (2 * var)) / sqrt(2 * pi * var)
241
+ for y in range(latent_height)
242
+ ]
243
+
244
+ weights = np.outer(y_probs, x_probs) * region.mask_weight
245
+ return torch.tile(torch.tensor(weights), (self.nbatch, self.latent_space_dim, 1, 1))
246
+
247
+ def _quartic_weights(self, region: DiffusionRegion) -> torch.tensor:
248
+ """Generates a quartic mask of weights for tile contributions
249
+
250
+ The quartic kernel has bounded support over the diffusion region, and a smooth decay to the region limits.
251
+ """
252
+ quartic_constant = 15.0 / 16.0
253
+
254
+ support = (np.array(range(region.latent_col_init, region.latent_col_end)) - region.latent_col_init) / (
255
+ region.latent_col_end - region.latent_col_init - 1
256
+ ) * 1.99 - (1.99 / 2.0)
257
+ x_probs = quartic_constant * np.square(1 - np.square(support))
258
+ support = (np.array(range(region.latent_row_init, region.latent_row_end)) - region.latent_row_init) / (
259
+ region.latent_row_end - region.latent_row_init - 1
260
+ ) * 1.99 - (1.99 / 2.0)
261
+ y_probs = quartic_constant * np.square(1 - np.square(support))
262
+
263
+ weights = np.outer(y_probs, x_probs) * region.mask_weight
264
+ return torch.tile(torch.tensor(weights), (self.nbatch, self.latent_space_dim, 1, 1))
265
+
266
+
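A small sketch of how `MaskWeightsBuilder` is used for a single region, meant to be run with the classes above in scope; the prompt and region size are arbitrary and only the tensor shape is checked:

```py
region = Text2ImageRegion(row_init=0, row_end=512, col_init=0, col_end=512, prompt="a forest")
builder = MaskWeightsBuilder(latent_space_dim=4, nbatch=1)
weights = builder.compute_mask_weights(region)  # gaussian mask by default
print(weights.shape)  # torch.Size([1, 4, 64, 64]) -- one weight map per latent channel
```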
267
+ class StableDiffusionCanvasPipeline(DiffusionPipeline):
268
+ """Stable Diffusion pipeline that mixes several diffusers in the same canvas"""
269
+
270
+ def __init__(
271
+ self,
272
+ vae: AutoencoderKL,
273
+ text_encoder: CLIPTextModel,
274
+ tokenizer: CLIPTokenizer,
275
+ unet: UNet2DConditionModel,
276
+ scheduler: Union[DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler],
277
+ safety_checker: StableDiffusionSafetyChecker,
278
+ feature_extractor: CLIPFeatureExtractor,
279
+ ):
280
+ super().__init__()
281
+ self.register_modules(
282
+ vae=vae,
283
+ text_encoder=text_encoder,
284
+ tokenizer=tokenizer,
285
+ unet=unet,
286
+ scheduler=scheduler,
287
+ safety_checker=safety_checker,
288
+ feature_extractor=feature_extractor,
289
+ )
290
+
291
+ def decode_latents(self, latents, cpu_vae=False):
292
+ """Decodes a given array of latents into pixel space"""
293
+ # scale and decode the image latents with vae
294
+ if cpu_vae:
295
+ lat = deepcopy(latents).cpu()
296
+ vae = deepcopy(self.vae).cpu()
297
+ else:
298
+ lat = latents
299
+ vae = self.vae
300
+
301
+ lat = 1 / 0.18215 * lat
302
+ image = vae.decode(lat).sample
303
+
304
+ image = (image / 2 + 0.5).clamp(0, 1)
305
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
306
+
307
+ return self.numpy_to_pil(image)
308
+
309
+ def get_latest_timestep_img2img(self, num_inference_steps, strength):
310
+ """Finds the latest timesteps where an img2img strength does not impose latents anymore"""
311
+ # get the original timestep using init_timestep
312
+ offset = self.scheduler.config.get("steps_offset", 0)
313
+ init_timestep = int(num_inference_steps * (1 - strength)) + offset
314
+ init_timestep = min(init_timestep, num_inference_steps)
315
+
316
+ t_start = min(max(num_inference_steps - init_timestep + offset, 0), num_inference_steps - 1)
317
+ latest_timestep = self.scheduler.timesteps[t_start]
318
+
319
+ return latest_timestep
320
+
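A worked example of the timestep arithmetic in `get_latest_timestep_img2img`, assuming a scheduler with no `steps_offset`:

```py
num_inference_steps, strength = 50, 0.8
init_timestep = int(num_inference_steps * (1 - strength))  # 10 steps are "reserved" for the image
t_start = min(max(num_inference_steps - init_timestep, 0), num_inference_steps - 1)  # 40
# An Image2ImageRegion with strength 0.8 therefore keeps overriding latents
# until the denoising loop reaches scheduler.timesteps[40].
```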
321
+ @torch.no_grad()
322
+ def __call__(
323
+ self,
324
+ canvas_height: int,
325
+ canvas_width: int,
326
+ regions: List[DiffusionRegion],
327
+ num_inference_steps: Optional[int] = 50,
328
+ seed: Optional[int] = 12345,
329
+ reroll_regions: Optional[List[RerollRegion]] = None,
330
+ cpu_vae: Optional[bool] = False,
331
+ decode_steps: Optional[bool] = False,
332
+ ):
333
+ if reroll_regions is None:
334
+ reroll_regions = []
335
+ batch_size = 1
336
+
337
+ if decode_steps:
338
+ steps_images = []
339
+
340
+ # Prepare scheduler
341
+ self.scheduler.set_timesteps(num_inference_steps, device=self.device)
342
+
343
+ # Split diffusion regions by their kind
344
+ text2image_regions = [region for region in regions if isinstance(region, Text2ImageRegion)]
345
+ image2image_regions = [region for region in regions if isinstance(region, Image2ImageRegion)]
346
+
347
+ # Prepare text embeddings
348
+ for region in text2image_regions:
349
+ region.tokenize_prompt(self.tokenizer)
350
+ region.encode_prompt(self.text_encoder, self.device)
351
+
352
+ # Create original noisy latents using the timesteps
353
+ latents_shape = (batch_size, self.unet.config.in_channels, canvas_height // 8, canvas_width // 8)
354
+ generator = torch.Generator(self.device).manual_seed(seed)
355
+ init_noise = torch.randn(latents_shape, generator=generator, device=self.device)
356
+
357
+ # Reset latents in seed reroll regions, if requested
358
+ for region in reroll_regions:
359
+ if region.reroll_mode == RerollModes.RESET.value:
360
+ region_shape = (
361
+ latents_shape[0],
362
+ latents_shape[1],
363
+ region.latent_row_end - region.latent_row_init,
364
+ region.latent_col_end - region.latent_col_init,
365
+ )
366
+ init_noise[
367
+ :,
368
+ :,
369
+ region.latent_row_init : region.latent_row_end,
370
+ region.latent_col_init : region.latent_col_end,
371
+ ] = torch.randn(region_shape, generator=region.get_region_generator(self.device), device=self.device)
372
+
373
+ # Apply epsilon noise to regions: first diffusion regions, then reroll regions
374
+ all_eps_rerolls = regions + [r for r in reroll_regions if r.reroll_mode == RerollModes.EPSILON.value]
375
+ for region in all_eps_rerolls:
376
+ if region.noise_eps > 0:
377
+ region_noise = init_noise[
378
+ :,
379
+ :,
380
+ region.latent_row_init : region.latent_row_end,
381
+ region.latent_col_init : region.latent_col_end,
382
+ ]
383
+ eps_noise = (
384
+ torch.randn(
385
+ region_noise.shape, generator=region.get_region_generator(self.device), device=self.device
386
+ )
387
+ * region.noise_eps
388
+ )
389
+ init_noise[
390
+ :,
391
+ :,
392
+ region.latent_row_init : region.latent_row_end,
393
+ region.latent_col_init : region.latent_col_end,
394
+ ] += eps_noise
395
+
396
+ # scale the initial noise by the standard deviation required by the scheduler
397
+ latents = init_noise * self.scheduler.init_noise_sigma
398
+
399
+ # Get unconditional embeddings for classifier free guidance in text2image regions
400
+ for region in text2image_regions:
401
+ max_length = region.tokenized_prompt.input_ids.shape[-1]
402
+ uncond_input = self.tokenizer(
403
+ [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
404
+ )
405
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
406
+
407
+ # For classifier free guidance, we need to do two forward passes.
408
+ # Here we concatenate the unconditional and text embeddings into a single batch
409
+ # to avoid doing two forward passes
410
+ region.encoded_prompt = torch.cat([uncond_embeddings, region.encoded_prompt])
411
+
412
+ # Prepare image latents
413
+ for region in image2image_regions:
414
+ region.encode_reference_image(self.vae, device=self.device, generator=generator)
415
+
416
+ # Prepare mask of weights for each region
417
+ mask_builder = MaskWeightsBuilder(latent_space_dim=self.unet.config.in_channels, nbatch=batch_size)
418
+ mask_weights = [mask_builder.compute_mask_weights(region).to(self.device) for region in text2image_regions]
419
+
420
+ # Diffusion timesteps
421
+ for i, t in tqdm(enumerate(self.scheduler.timesteps)):
422
+ # Diffuse each region
423
+ noise_preds_regions = []
424
+
425
+ # text2image regions
426
+ for region in text2image_regions:
427
+ region_latents = latents[
428
+ :,
429
+ :,
430
+ region.latent_row_init : region.latent_row_end,
431
+ region.latent_col_init : region.latent_col_end,
432
+ ]
433
+ # expand the latents if we are doing classifier free guidance
434
+ latent_model_input = torch.cat([region_latents] * 2)
435
+ # scale model input following scheduler rules
436
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
437
+ # predict the noise residual
438
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=region.encoded_prompt)["sample"]
439
+ # perform guidance
440
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
441
+ noise_pred_region = noise_pred_uncond + region.guidance_scale * (noise_pred_text - noise_pred_uncond)
442
+ noise_preds_regions.append(noise_pred_region)
443
+
444
+ # Merge noise predictions for all tiles
445
+ noise_pred = torch.zeros(latents.shape, device=self.device)
446
+ contributors = torch.zeros(latents.shape, device=self.device)
447
+ # Add each tile contribution to overall latents
448
+ for region, noise_pred_region, mask_weights_region in zip(
449
+ text2image_regions, noise_preds_regions, mask_weights
450
+ ):
451
+ noise_pred[
452
+ :,
453
+ :,
454
+ region.latent_row_init : region.latent_row_end,
455
+ region.latent_col_init : region.latent_col_end,
456
+ ] += (
457
+ noise_pred_region * mask_weights_region
458
+ )
459
+ contributors[
460
+ :,
461
+ :,
462
+ region.latent_row_init : region.latent_row_end,
463
+ region.latent_col_init : region.latent_col_end,
464
+ ] += mask_weights_region
465
+ # Average overlapping areas with more than 1 contributor
466
+ noise_pred /= contributors
467
+ noise_pred = torch.nan_to_num(
468
+ noise_pred
469
+ ) # Replace NaNs by zeros: NaN can appear if a position is not covered by any DiffusionRegion
470
+
471
+ # compute the previous noisy sample x_t -> x_t-1
472
+ latents = self.scheduler.step(noise_pred, t, latents).prev_sample
473
+
474
+ # Image2Image regions: override latents generated by the scheduler
475
+ for region in image2image_regions:
476
+ influence_step = self.get_latest_timestep_img2img(num_inference_steps, region.strength)
477
+ # Only override in the timesteps before the last influence step of the image (given by its strength)
478
+ if t > influence_step:
479
+ timestep = t.repeat(batch_size)
480
+ region_init_noise = init_noise[
481
+ :,
482
+ :,
483
+ region.latent_row_init : region.latent_row_end,
484
+ region.latent_col_init : region.latent_col_end,
485
+ ]
486
+ region_latents = self.scheduler.add_noise(region.reference_latents, region_init_noise, timestep)
487
+ latents[
488
+ :,
489
+ :,
490
+ region.latent_row_init : region.latent_row_end,
491
+ region.latent_col_init : region.latent_col_end,
492
+ ] = region_latents
493
+
494
+ if decode_steps:
495
+ steps_images.append(self.decode_latents(latents, cpu_vae))
496
+
497
+ # scale and decode the image latents with vae
498
+ image = self.decode_latents(latents, cpu_vae)
499
+
500
+ output = {"images": image}
501
+ if decode_steps:
502
+ output = {**output, "steps_images": steps_images}
503
+ return output
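A minimal usage sketch for `StableDiffusionCanvasPipeline`, assuming it is loaded via `custom_pipeline="mixture_canvas"` and that `Text2ImageRegion` from this file is importable in the calling scope (the checkpoint and prompts are assumptions):

```py
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", custom_pipeline="mixture_canvas"
).to("cuda")

output = pipeline(
    canvas_height=512,
    canvas_width=1024,
    regions=[
        # Two overlapping text2image regions; the overlap (columns 384-640) is blended by the masks.
        Text2ImageRegion(0, 512, 0, 640, prompt="a lush green forest, highly detailed"),
        Text2ImageRegion(0, 512, 384, 1024, prompt="a calm lake at sunset, highly detailed"),
    ],
    num_inference_steps=50,
    seed=1234,
)
output["images"][0].save("canvas.png")
```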
v0.19.2/mixture_tiling.py ADDED
@@ -0,0 +1,405 @@
1
+ import inspect
2
+ from copy import deepcopy
3
+ from enum import Enum
4
+ from typing import List, Optional, Tuple, Union
5
+
6
+ import torch
7
+ from tqdm.auto import tqdm
8
+
9
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
10
+ from diffusers.pipeline_utils import DiffusionPipeline
11
+ from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
12
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
13
+ from diffusers.utils import logging
14
+
15
+
16
+ try:
17
+ from ligo.segments import segment
18
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
19
+ except ImportError:
20
+ raise ImportError("Please install transformers and ligo-segments to use the mixture pipeline")
21
+
22
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
23
+
24
+ EXAMPLE_DOC_STRING = """
25
+ Examples:
26
+ ```py
27
+ >>> from diffusers import LMSDiscreteScheduler, DiffusionPipeline
28
+
29
+ >>> scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
30
+ >>> pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_tiling")
31
+ >>> pipeline.to("cuda")
32
+
33
+ >>> image = pipeline(
34
+ >>> prompt=[[
35
+ >>> "A charming house in the countryside, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
36
+ >>> "A dirt road in the countryside crossing pastures, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
37
+ >>> "An old and rusty giant robot lying on a dirt road, by jakub rozalski, dark sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece"
38
+ >>> ]],
39
+ >>> tile_height=640,
40
+ >>> tile_width=640,
41
+ >>> tile_row_overlap=0,
42
+ >>> tile_col_overlap=256,
43
+ >>> guidance_scale=8,
44
+ >>> seed=7178915308,
45
+ >>> num_inference_steps=50,
46
+ >>> )["images"][0]
47
+ ```
48
+ """
49
+
50
+
51
+ def _tile2pixel_indices(tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap):
52
+ """Given a tile's row and column numbers, returns the range of pixels affected by that tile in the overall image
53
+
54
+ Returns a tuple with:
55
+ - Starting coordinates of rows in pixel space
56
+ - Ending coordinates of rows in pixel space
57
+ - Starting coordinates of columns in pixel space
58
+ - Ending coordinates of columns in pixel space
59
+ """
60
+ px_row_init = 0 if tile_row == 0 else tile_row * (tile_height - tile_row_overlap)
61
+ px_row_end = px_row_init + tile_height
62
+ px_col_init = 0 if tile_col == 0 else tile_col * (tile_width - tile_col_overlap)
63
+ px_col_end = px_col_init + tile_width
64
+ return px_row_init, px_row_end, px_col_init, px_col_end
65
+
66
+
67
+ def _pixel2latent_indices(px_row_init, px_row_end, px_col_init, px_col_end):
68
+ """Translates coordinates in pixel space to coordinates in latent space"""
69
+ return px_row_init // 8, px_row_end // 8, px_col_init // 8, px_col_end // 8
70
+
71
+
72
+ def _tile2latent_indices(tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap):
73
+ """Given a tile row and column numbers returns the range of latents affected by that tiles in the overall image
74
+
75
+ Returns a tuple with:
76
+ - Starting coordinates of rows in latent space
77
+ - Ending coordinates of rows in latent space
78
+ - Starting coordinates of columns in latent space
79
+ - Ending coordinates of columns in latent space
80
+ """
81
+ px_row_init, px_row_end, px_col_init, px_col_end = _tile2pixel_indices(
82
+ tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
83
+ )
84
+ return _pixel2latent_indices(px_row_init, px_row_end, px_col_init, px_col_end)
85
+
86
+
87
+ def _tile2latent_exclusive_indices(
88
+ tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap, rows, columns
89
+ ):
90
+ """Given a tile row and column numbers returns the range of latents affected only by that tile in the overall image
91
+
92
+ Returns a tuple with:
93
+ - Starting coordinates of rows in latent space
94
+ - Ending coordinates of rows in latent space
95
+ - Starting coordinates of columns in latent space
96
+ - Ending coordinates of columns in latent space
97
+ """
98
+ row_init, row_end, col_init, col_end = _tile2latent_indices(
99
+ tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
100
+ )
101
+ row_segment = segment(row_init, row_end)
102
+ col_segment = segment(col_init, col_end)
103
+ # Iterate over the rest of tiles, clipping the region for the current tile
104
+ for row in range(rows):
105
+ for column in range(columns):
106
+ if row != tile_row and column != tile_col:
107
+ clip_row_init, clip_row_end, clip_col_init, clip_col_end = _tile2latent_indices(
108
+ row, column, tile_width, tile_height, tile_row_overlap, tile_col_overlap
109
+ )
110
+ row_segment = row_segment - segment(clip_row_init, clip_row_end)
111
+ col_segment = col_segment - segment(clip_col_init, clip_col_end)
112
+ # return row_init, row_end, col_init, col_end
113
+ return row_segment[0], row_segment[1], col_segment[0], col_segment[1]
114
+
115
+
116
+ class StableDiffusionExtrasMixin:
117
+ """Mixin providing additional convenience method to Stable Diffusion pipelines"""
118
+
119
+ def decode_latents(self, latents, cpu_vae=False):
120
+ """Decodes a given array of latents into pixel space"""
121
+ # scale and decode the image latents with vae
122
+ if cpu_vae:
123
+ lat = deepcopy(latents).cpu()
124
+ vae = deepcopy(self.vae).cpu()
125
+ else:
126
+ lat = latents
127
+ vae = self.vae
128
+
129
+ lat = 1 / 0.18215 * lat
130
+ image = vae.decode(lat).sample
131
+
132
+ image = (image / 2 + 0.5).clamp(0, 1)
133
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
134
+
135
+ return self.numpy_to_pil(image)
136
+
137
+
138
+ class StableDiffusionTilingPipeline(DiffusionPipeline, StableDiffusionExtrasMixin):
139
+ def __init__(
140
+ self,
141
+ vae: AutoencoderKL,
142
+ text_encoder: CLIPTextModel,
143
+ tokenizer: CLIPTokenizer,
144
+ unet: UNet2DConditionModel,
145
+ scheduler: Union[DDIMScheduler, PNDMScheduler],
146
+ safety_checker: StableDiffusionSafetyChecker,
147
+ feature_extractor: CLIPFeatureExtractor,
148
+ ):
149
+ super().__init__()
150
+ self.register_modules(
151
+ vae=vae,
152
+ text_encoder=text_encoder,
153
+ tokenizer=tokenizer,
154
+ unet=unet,
155
+ scheduler=scheduler,
156
+ safety_checker=safety_checker,
157
+ feature_extractor=feature_extractor,
158
+ )
159
+
160
+ class SeedTilesMode(Enum):
161
+ """Modes in which the latents of a particular tile can be re-seeded"""
162
+
163
+ FULL = "full"
164
+ EXCLUSIVE = "exclusive"
165
+
166
+ @torch.no_grad()
167
+ def __call__(
168
+ self,
169
+ prompt: Union[str, List[List[str]]],
170
+ num_inference_steps: Optional[int] = 50,
171
+ guidance_scale: Optional[float] = 7.5,
172
+ eta: Optional[float] = 0.0,
173
+ seed: Optional[int] = None,
174
+ tile_height: Optional[int] = 512,
175
+ tile_width: Optional[int] = 512,
176
+ tile_row_overlap: Optional[int] = 256,
177
+ tile_col_overlap: Optional[int] = 256,
178
+ guidance_scale_tiles: Optional[List[List[float]]] = None,
179
+ seed_tiles: Optional[List[List[int]]] = None,
180
+ seed_tiles_mode: Optional[Union[str, List[List[str]]]] = "full",
181
+ seed_reroll_regions: Optional[List[Tuple[int, int, int, int, int]]] = None,
182
+ cpu_vae: Optional[bool] = False,
183
+ ):
184
+ r"""
185
+ Function to run the diffusion pipeline with tiling support.
186
+
187
+ Args:
188
+ prompt: either a single string (no tiling) or a list of lists with all the prompts to use (one list for each row of tiles). This will also define the tiling structure.
189
+ num_inference_steps: number of diffusion steps.
190
+ guidance_scale: classifier-free guidance scale.
191
+ seed: general random seed to initialize latents.
192
+ tile_height: height in pixels of each grid tile.
193
+ tile_width: width in pixels of each grid tile.
194
+ tile_row_overlap: number of overlap pixels between tiles in consecutive rows.
195
+ tile_col_overlap: number of overlap pixels between tiles in consecutive columns.
196
+ guidance_scale_tiles: specific weights for classifier-free guidance in each tile.
197
+ If None, or if an individual entry is None, the value provided in guidance_scale will be used for that tile.
198
+ seed_tiles: specific seeds for the initialization latents in each tile. These will override the latents generated for the whole canvas using the standard seed parameter.
199
+ seed_tiles_mode: either "full" or "exclusive". If "full", all the latents affected by the tile will be overridden. If "exclusive", only the latents that are affected exclusively by this tile (and no other tiles) will be overridden.
200
+ seed_reroll_regions: a list of tuples in the form (start row, end row, start column, end column, seed) defining regions in pixel space for which the latents will be overridden using the given seed. Takes priority over seed_tiles.
201
+ cpu_vae: the decoder from latent space to pixel space can require too much GPU RAM for large images. If you run into out-of-memory errors at the end of the generation process, try setting this parameter to True to run the decoder on the CPU. Slower, but it should run without memory issues.
202
+
203
+ Examples:
204
+
205
+ Returns:
206
+ A PIL image with the generated image.
207
+
208
+ """
209
+ if not isinstance(prompt, list) or not all(isinstance(row, list) for row in prompt):
210
+ raise ValueError(f"`prompt` has to be a list of lists but is {type(prompt)}")
211
+ grid_rows = len(prompt)
212
+ grid_cols = len(prompt[0])
213
+ if not all(len(row) == grid_cols for row in prompt):
214
+ raise ValueError("All prompt rows must have the same number of prompt columns")
215
+ if not isinstance(seed_tiles_mode, str) and (
216
+ not isinstance(seed_tiles_mode, list) or not all(isinstance(row, list) for row in seed_tiles_mode)
217
+ ):
218
+ raise ValueError(f"`seed_tiles_mode` has to be a string or list of lists but is {type(prompt)}")
219
+ if isinstance(seed_tiles_mode, str):
220
+ seed_tiles_mode = [[seed_tiles_mode for _ in range(len(row))] for row in prompt]
221
+
222
+ modes = [mode.value for mode in self.SeedTilesMode]
223
+ if any(mode not in modes for row in seed_tiles_mode for mode in row):
224
+ raise ValueError(f"Seed tiles mode must be one of {modes}")
225
+ if seed_reroll_regions is None:
226
+ seed_reroll_regions = []
227
+ batch_size = 1
228
+
229
+ # create original noisy latents using the timesteps
230
+ height = tile_height + (grid_rows - 1) * (tile_height - tile_row_overlap)
231
+ width = tile_width + (grid_cols - 1) * (tile_width - tile_col_overlap)
232
+ latents_shape = (batch_size, self.unet.config.in_channels, height // 8, width // 8)
233
+ generator = torch.Generator("cuda").manual_seed(seed)
234
+ latents = torch.randn(latents_shape, generator=generator, device=self.device)
235
+
236
+ # overwrite latents for specific tiles if provided
237
+ if seed_tiles is not None:
238
+ for row in range(grid_rows):
239
+ for col in range(grid_cols):
240
+ if (seed_tile := seed_tiles[row][col]) is not None:
241
+ mode = seed_tiles_mode[row][col]
242
+ if mode == self.SeedTilesMode.FULL.value:
243
+ row_init, row_end, col_init, col_end = _tile2latent_indices(
244
+ row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
245
+ )
246
+ else:
247
+ row_init, row_end, col_init, col_end = _tile2latent_exclusive_indices(
248
+ row,
249
+ col,
250
+ tile_width,
251
+ tile_height,
252
+ tile_row_overlap,
253
+ tile_col_overlap,
254
+ grid_rows,
255
+ grid_cols,
256
+ )
257
+ tile_generator = torch.Generator("cuda").manual_seed(seed_tile)
258
+ tile_shape = (latents_shape[0], latents_shape[1], row_end - row_init, col_end - col_init)
259
+ latents[:, :, row_init:row_end, col_init:col_end] = torch.randn(
260
+ tile_shape, generator=tile_generator, device=self.device
261
+ )
262
+
263
+ # overwrite again for seed reroll regions
264
+ for row_init, row_end, col_init, col_end, seed_reroll in seed_reroll_regions:
265
+ row_init, row_end, col_init, col_end = _pixel2latent_indices(
266
+ row_init, row_end, col_init, col_end
267
+ ) # to latent space coordinates
268
+ reroll_generator = torch.Generator("cuda").manual_seed(seed_reroll)
269
+ region_shape = (latents_shape[0], latents_shape[1], row_end - row_init, col_end - col_init)
270
+ latents[:, :, row_init:row_end, col_init:col_end] = torch.randn(
271
+ region_shape, generator=reroll_generator, device=self.device
272
+ )
273
+
274
+ # Prepare scheduler
275
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
276
+ extra_set_kwargs = {}
277
+ if accepts_offset:
278
+ extra_set_kwargs["offset"] = 1
279
+ self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
280
+ # if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
281
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
282
+ latents = latents * self.scheduler.sigmas[0]
283
+
284
+ # get prompts text embeddings
285
+ text_input = [
286
+ [
287
+ self.tokenizer(
288
+ col,
289
+ padding="max_length",
290
+ max_length=self.tokenizer.model_max_length,
291
+ truncation=True,
292
+ return_tensors="pt",
293
+ )
294
+ for col in row
295
+ ]
296
+ for row in prompt
297
+ ]
298
+ text_embeddings = [[self.text_encoder(col.input_ids.to(self.device))[0] for col in row] for row in text_input]
299
+
300
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
301
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
302
+ # corresponds to doing no classifier free guidance.
303
+ do_classifier_free_guidance = guidance_scale > 1.0 # TODO: also active if any tile has guidance scale
304
+ # get unconditional embeddings for classifier free guidance
305
+ if do_classifier_free_guidance:
306
+ for i in range(grid_rows):
307
+ for j in range(grid_cols):
308
+ max_length = text_input[i][j].input_ids.shape[-1]
309
+ uncond_input = self.tokenizer(
310
+ [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
311
+ )
312
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
313
+
314
+ # For classifier free guidance, we need to do two forward passes.
315
+ # Here we concatenate the unconditional and text embeddings into a single batch
316
+ # to avoid doing two forward passes
317
+ text_embeddings[i][j] = torch.cat([uncond_embeddings, text_embeddings[i][j]])
318
+
319
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
320
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
321
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
322
+ # and should be between [0, 1]
323
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
324
+ extra_step_kwargs = {}
325
+ if accepts_eta:
326
+ extra_step_kwargs["eta"] = eta
327
+
328
+ # Mask of tile weight strengths
329
+ tile_weights = self._gaussian_weights(tile_width, tile_height, batch_size)
330
+
331
+ # Diffusion timesteps
332
+ for i, t in tqdm(enumerate(self.scheduler.timesteps)):
333
+ # Diffuse each tile
334
+ noise_preds = []
335
+ for row in range(grid_rows):
336
+ noise_preds_row = []
337
+ for col in range(grid_cols):
338
+ px_row_init, px_row_end, px_col_init, px_col_end = _tile2latent_indices(
339
+ row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
340
+ )
341
+ tile_latents = latents[:, :, px_row_init:px_row_end, px_col_init:px_col_end]
342
+ # expand the latents if we are doing classifier free guidance
343
+ latent_model_input = torch.cat([tile_latents] * 2) if do_classifier_free_guidance else tile_latents
344
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
345
+ # predict the noise residual
346
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings[row][col])[
347
+ "sample"
348
+ ]
349
+ # perform guidance
350
+ if do_classifier_free_guidance:
351
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
352
+ guidance = (
353
+ guidance_scale
354
+ if guidance_scale_tiles is None or guidance_scale_tiles[row][col] is None
355
+ else guidance_scale_tiles[row][col]
356
+ )
357
+ noise_pred_tile = noise_pred_uncond + guidance * (noise_pred_text - noise_pred_uncond)
358
+ noise_preds_row.append(noise_pred_tile)
359
+ noise_preds.append(noise_preds_row)
360
+ # Stitch noise predictions for all tiles
361
+ noise_pred = torch.zeros(latents.shape, device=self.device)
362
+ contributors = torch.zeros(latents.shape, device=self.device)
363
+ # Add each tile contribution to overall latents
364
+ for row in range(grid_rows):
365
+ for col in range(grid_cols):
366
+ px_row_init, px_row_end, px_col_init, px_col_end = _tile2latent_indices(
367
+ row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
368
+ )
369
+ noise_pred[:, :, px_row_init:px_row_end, px_col_init:px_col_end] += (
370
+ noise_preds[row][col] * tile_weights
371
+ )
372
+ contributors[:, :, px_row_init:px_row_end, px_col_init:px_col_end] += tile_weights
373
+ # Average overlapping areas with more than 1 contributor
374
+ noise_pred /= contributors
375
+
376
+ # compute the previous noisy sample x_t -> x_t-1
377
+ latents = self.scheduler.step(noise_pred, t, latents).prev_sample
378
+
379
+ # scale and decode the image latents with vae
380
+ image = self.decode_latents(latents, cpu_vae)
381
+
382
+ return {"images": image}
383
+
384
+ def _gaussian_weights(self, tile_width, tile_height, nbatches):
385
+ """Generates a gaussian mask of weights for tile contributions"""
386
+ import numpy as np
387
+ from numpy import exp, pi, sqrt
388
+
389
+ latent_width = tile_width // 8
390
+ latent_height = tile_height // 8
391
+
392
+ var = 0.01
393
+ midpoint = (latent_width - 1) / 2 # -1 because index goes from 0 to latent_width - 1
394
+ x_probs = [
395
+ exp(-(x - midpoint) * (x - midpoint) / (latent_width * latent_width) / (2 * var)) / sqrt(2 * pi * var)
396
+ for x in range(latent_width)
397
+ ]
398
+ midpoint = (latent_height - 1) / 2 # -1 because index goes from 0 to latent_height - 1
399
+ y_probs = [
400
+ exp(-(y - midpoint) * (y - midpoint) / (latent_height * latent_height) / (2 * var)) / sqrt(2 * pi * var)
401
+ for y in range(latent_height)
402
+ ]
403
+
404
+ weights = np.outer(y_probs, x_probs)
405
+ return torch.tile(torch.tensor(weights, device=self.device), (nbatches, self.unet.config.in_channels, 1, 1))
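
For reference, the tiling arithmetic implemented by `_tile2pixel_indices` / `_pixel2latent_indices` above can be checked with a short standalone sketch. It uses the same tile sizes as the docstring example (640x640 tiles, one row, three columns, 256-pixel column overlap); nothing here depends on the pipeline itself:

```py
# Standalone check of the tile-indexing arithmetic (illustrative values only).
tile_height, tile_width = 640, 640
tile_row_overlap, tile_col_overlap = 0, 256
grid_rows, grid_cols = 1, 3

# Canvas size, as computed inside __call__ above.
height = tile_height + (grid_rows - 1) * (tile_height - tile_row_overlap)
width = tile_width + (grid_cols - 1) * (tile_width - tile_col_overlap)
print(height, width)  # 640 1408

for tile_col in range(grid_cols):
    # Same formulas as _tile2pixel_indices / _pixel2latent_indices.
    px_col_init = 0 if tile_col == 0 else tile_col * (tile_width - tile_col_overlap)
    px_col_end = px_col_init + tile_width
    print(tile_col, (px_col_init // 8, px_col_end // 8))
    # 0 -> (0, 80), 1 -> (48, 128), 2 -> (96, 176): neighbouring tiles share 32 latent columns (256 px / 8).
```
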
v0.19.2/multilingual_stable_diffusion.py ADDED
@@ -0,0 +1,436 @@
1
+ import inspect
2
+ from typing import Callable, List, Optional, Union
3
+
4
+ import torch
5
+ from transformers import (
6
+ CLIPImageProcessor,
7
+ CLIPTextModel,
8
+ CLIPTokenizer,
9
+ MBart50TokenizerFast,
10
+ MBartForConditionalGeneration,
11
+ pipeline,
12
+ )
13
+
14
+ from diffusers import DiffusionPipeline
15
+ from diffusers.configuration_utils import FrozenDict
16
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
17
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
18
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
19
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
20
+ from diffusers.utils import deprecate, logging
21
+
22
+
23
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
24
+
25
+
26
+ def detect_language(pipe, prompt, batch_size):
27
+ """helper function to detect language(s) of prompt"""
28
+
29
+ if batch_size == 1:
30
+ preds = pipe(prompt, top_k=1, truncation=True, max_length=128)
31
+ return preds[0]["label"]
32
+ else:
33
+ detected_languages = []
34
+ for p in prompt:
35
+ preds = pipe(p, top_k=1, truncation=True, max_length=128)
36
+ detected_languages.append(preds[0]["label"])
37
+
38
+ return detected_languages
39
+
40
+
41
+ def translate_prompt(prompt, translation_tokenizer, translation_model, device):
42
+ """helper function to translate prompt to English"""
43
+
44
+ encoded_prompt = translation_tokenizer(prompt, return_tensors="pt").to(device)
45
+ generated_tokens = translation_model.generate(**encoded_prompt, max_new_tokens=1000)
46
+ en_trans = translation_tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
47
+
48
+ return en_trans[0]
49
+
50
+
51
+ class MultilingualStableDiffusion(DiffusionPipeline):
52
+ r"""
53
+ Pipeline for text-to-image generation using Stable Diffusion in different languages.
54
+
55
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
56
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
57
+
58
+ Args:
59
+ detection_pipeline ([`pipeline`]):
60
+ Transformers pipeline to detect prompt's language.
61
+ translation_model ([`MBartForConditionalGeneration`]):
62
+ Model to translate prompt to English, if necessary. Please refer to the
63
+ [model card](https://huggingface.co/docs/transformers/model_doc/mbart) for details.
64
+ translation_tokenizer ([`MBart50TokenizerFast`]):
65
+ Tokenizer of the translation model.
66
+ vae ([`AutoencoderKL`]):
67
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
68
+ text_encoder ([`CLIPTextModel`]):
69
+ Frozen text-encoder. Stable Diffusion uses the text portion of
70
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
71
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
72
+ tokenizer (`CLIPTokenizer`):
73
+ Tokenizer of class
74
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
75
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
76
+ scheduler ([`SchedulerMixin`]):
77
+ A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of
78
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
79
+ safety_checker ([`StableDiffusionSafetyChecker`]):
80
+ Classification module that estimates whether generated images could be considered offensive or harmful.
81
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
82
+ feature_extractor ([`CLIPImageProcessor`]):
83
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
84
+ """
85
+
86
+ def __init__(
87
+ self,
88
+ detection_pipeline: pipeline,
89
+ translation_model: MBartForConditionalGeneration,
90
+ translation_tokenizer: MBart50TokenizerFast,
91
+ vae: AutoencoderKL,
92
+ text_encoder: CLIPTextModel,
93
+ tokenizer: CLIPTokenizer,
94
+ unet: UNet2DConditionModel,
95
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
96
+ safety_checker: StableDiffusionSafetyChecker,
97
+ feature_extractor: CLIPImageProcessor,
98
+ ):
99
+ super().__init__()
100
+
101
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
102
+ deprecation_message = (
103
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
104
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
105
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
106
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
107
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
108
+ " file"
109
+ )
110
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
111
+ new_config = dict(scheduler.config)
112
+ new_config["steps_offset"] = 1
113
+ scheduler._internal_dict = FrozenDict(new_config)
114
+
115
+ if safety_checker is None:
116
+ logger.warning(
117
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
118
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
119
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
120
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
121
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
122
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
123
+ )
124
+
125
+ self.register_modules(
126
+ detection_pipeline=detection_pipeline,
127
+ translation_model=translation_model,
128
+ translation_tokenizer=translation_tokenizer,
129
+ vae=vae,
130
+ text_encoder=text_encoder,
131
+ tokenizer=tokenizer,
132
+ unet=unet,
133
+ scheduler=scheduler,
134
+ safety_checker=safety_checker,
135
+ feature_extractor=feature_extractor,
136
+ )
137
+
138
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
139
+ r"""
140
+ Enable sliced attention computation.
141
+
142
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
143
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
144
+
145
+ Args:
146
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
147
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
148
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
149
+ `attention_head_dim` must be a multiple of `slice_size`.
150
+ """
151
+ if slice_size == "auto":
152
+ # half the attention head size is usually a good trade-off between
153
+ # speed and memory
154
+ slice_size = self.unet.config.attention_head_dim // 2
155
+ self.unet.set_attention_slice(slice_size)
156
+
157
+ def disable_attention_slicing(self):
158
+ r"""
159
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
160
+ back to computing attention in one step.
161
+ """
162
+ # set slice_size = `None` to disable `attention slicing`
163
+ self.enable_attention_slicing(None)
164
+
165
+ @torch.no_grad()
166
+ def __call__(
167
+ self,
168
+ prompt: Union[str, List[str]],
169
+ height: int = 512,
170
+ width: int = 512,
171
+ num_inference_steps: int = 50,
172
+ guidance_scale: float = 7.5,
173
+ negative_prompt: Optional[Union[str, List[str]]] = None,
174
+ num_images_per_prompt: Optional[int] = 1,
175
+ eta: float = 0.0,
176
+ generator: Optional[torch.Generator] = None,
177
+ latents: Optional[torch.FloatTensor] = None,
178
+ output_type: Optional[str] = "pil",
179
+ return_dict: bool = True,
180
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
181
+ callback_steps: int = 1,
182
+ **kwargs,
183
+ ):
184
+ r"""
185
+ Function invoked when calling the pipeline for generation.
186
+
187
+ Args:
188
+ prompt (`str` or `List[str]`):
189
+ The prompt or prompts to guide the image generation. Can be in different languages.
190
+ height (`int`, *optional*, defaults to 512):
191
+ The height in pixels of the generated image.
192
+ width (`int`, *optional*, defaults to 512):
193
+ The width in pixels of the generated image.
194
+ num_inference_steps (`int`, *optional*, defaults to 50):
195
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
196
+ expense of slower inference.
197
+ guidance_scale (`float`, *optional*, defaults to 7.5):
198
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
199
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
200
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
201
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
202
+ usually at the expense of lower image quality.
203
+ negative_prompt (`str` or `List[str]`, *optional*):
204
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
205
+ if `guidance_scale` is less than `1`).
206
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
207
+ The number of images to generate per prompt.
208
+ eta (`float`, *optional*, defaults to 0.0):
209
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
210
+ [`schedulers.DDIMScheduler`], will be ignored for others.
211
+ generator (`torch.Generator`, *optional*):
212
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
213
+ deterministic.
214
+ latents (`torch.FloatTensor`, *optional*):
215
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
216
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
217
+ tensor will ge generated by sampling using the supplied random `generator`.
218
+ output_type (`str`, *optional*, defaults to `"pil"`):
219
+ The output format of the generate image. Choose between
220
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
221
+ return_dict (`bool`, *optional*, defaults to `True`):
222
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
223
+ plain tuple.
224
+ callback (`Callable`, *optional*):
225
+ A function that will be called every `callback_steps` steps during inference. The function will be
226
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
227
+ callback_steps (`int`, *optional*, defaults to 1):
228
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
229
+ called at every step.
230
+
231
+ Returns:
232
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
233
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
234
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
235
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
236
+ (nsfw) content, according to the `safety_checker`.
237
+ """
238
+ if isinstance(prompt, str):
239
+ batch_size = 1
240
+ elif isinstance(prompt, list):
241
+ batch_size = len(prompt)
242
+ else:
243
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
244
+
245
+ if height % 8 != 0 or width % 8 != 0:
246
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
247
+
248
+ if (callback_steps is None) or (
249
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
250
+ ):
251
+ raise ValueError(
252
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
253
+ f" {type(callback_steps)}."
254
+ )
255
+
256
+ # detect language and translate if necessary
257
+ prompt_language = detect_language(self.detection_pipeline, prompt, batch_size)
258
+ if batch_size == 1 and prompt_language != "en":
259
+ prompt = translate_prompt(prompt, self.translation_tokenizer, self.translation_model, self.device)
260
+
261
+ if isinstance(prompt, list):
262
+ for index in range(batch_size):
263
+ if prompt_language[index] != "en":
264
+ p = translate_prompt(
265
+ prompt[index], self.translation_tokenizer, self.translation_model, self.device
266
+ )
267
+ prompt[index] = p
268
+
269
+ # get prompt text embeddings
270
+ text_inputs = self.tokenizer(
271
+ prompt,
272
+ padding="max_length",
273
+ max_length=self.tokenizer.model_max_length,
274
+ return_tensors="pt",
275
+ )
276
+ text_input_ids = text_inputs.input_ids
277
+
278
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
279
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
280
+ logger.warning(
281
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
282
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
283
+ )
284
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
285
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
286
+
287
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
288
+ bs_embed, seq_len, _ = text_embeddings.shape
289
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
290
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
291
+
292
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
293
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
294
+ # corresponds to doing no classifier free guidance.
295
+ do_classifier_free_guidance = guidance_scale > 1.0
296
+ # get unconditional embeddings for classifier free guidance
297
+ if do_classifier_free_guidance:
298
+ uncond_tokens: List[str]
299
+ if negative_prompt is None:
300
+ uncond_tokens = [""] * batch_size
301
+ elif type(prompt) is not type(negative_prompt):
302
+ raise TypeError(
303
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
304
+ f" {type(prompt)}."
305
+ )
306
+ elif isinstance(negative_prompt, str):
307
+ # detect language and translate it if necessary
308
+ negative_prompt_language = detect_language(self.detection_pipeline, negative_prompt, batch_size)
309
+ if negative_prompt_language != "en":
310
+ negative_prompt = translate_prompt(
311
+ negative_prompt, self.translation_tokenizer, self.translation_model, self.device
312
+ )
313
+ if isinstance(negative_prompt, str):
314
+ uncond_tokens = [negative_prompt]
315
+ elif batch_size != len(negative_prompt):
316
+ raise ValueError(
317
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
318
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
319
+ " the batch size of `prompt`."
320
+ )
321
+ else:
322
+ # detect language and translate it if necessary
323
+ if isinstance(negative_prompt, list):
324
+ negative_prompt_languages = detect_language(self.detection_pipeline, negative_prompt, batch_size)
325
+ for index in range(batch_size):
326
+ if negative_prompt_languages[index] != "en":
327
+ p = translate_prompt(
328
+ negative_prompt[index], self.translation_tokenizer, self.translation_model, self.device
329
+ )
330
+ negative_prompt[index] = p
331
+ uncond_tokens = negative_prompt
332
+
333
+ max_length = text_input_ids.shape[-1]
334
+ uncond_input = self.tokenizer(
335
+ uncond_tokens,
336
+ padding="max_length",
337
+ max_length=max_length,
338
+ truncation=True,
339
+ return_tensors="pt",
340
+ )
341
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
342
+
343
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
344
+ seq_len = uncond_embeddings.shape[1]
345
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
346
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
347
+
348
+ # For classifier free guidance, we need to do two forward passes.
349
+ # Here we concatenate the unconditional and text embeddings into a single batch
350
+ # to avoid doing two forward passes
351
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
352
+
353
+ # get the initial random noise unless the user supplied it
354
+
355
+ # Unlike in other pipelines, latents need to be generated in the target device
356
+ # for 1-to-1 results reproducibility with the CompVis implementation.
357
+ # However this currently doesn't work in `mps`.
358
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
359
+ latents_dtype = text_embeddings.dtype
360
+ if latents is None:
361
+ if self.device.type == "mps":
362
+ # randn does not work reproducibly on mps
363
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
364
+ self.device
365
+ )
366
+ else:
367
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
368
+ else:
369
+ if latents.shape != latents_shape:
370
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
371
+ latents = latents.to(self.device)
372
+
373
+ # set timesteps
374
+ self.scheduler.set_timesteps(num_inference_steps)
375
+
376
+ # Some schedulers like PNDM have timesteps as arrays
377
+ # It's more optimized to move all timesteps to correct device beforehand
378
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
379
+
380
+ # scale the initial noise by the standard deviation required by the scheduler
381
+ latents = latents * self.scheduler.init_noise_sigma
382
+
383
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
384
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
385
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
386
+ # and should be between [0, 1]
387
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
388
+ extra_step_kwargs = {}
389
+ if accepts_eta:
390
+ extra_step_kwargs["eta"] = eta
391
+
392
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
393
+ # expand the latents if we are doing classifier free guidance
394
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
395
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
396
+
397
+ # predict the noise residual
398
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
399
+
400
+ # perform guidance
401
+ if do_classifier_free_guidance:
402
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
403
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
404
+
405
+ # compute the previous noisy sample x_t -> x_t-1
406
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
407
+
408
+ # call the callback, if provided
409
+ if callback is not None and i % callback_steps == 0:
410
+ callback(i, t, latents)
411
+
412
+ latents = 1 / 0.18215 * latents
413
+ image = self.vae.decode(latents).sample
414
+
415
+ image = (image / 2 + 0.5).clamp(0, 1)
416
+
417
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
418
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
419
+
420
+ if self.safety_checker is not None:
421
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
422
+ self.device
423
+ )
424
+ image, has_nsfw_concept = self.safety_checker(
425
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
426
+ )
427
+ else:
428
+ has_nsfw_concept = None
429
+
430
+ if output_type == "pil":
431
+ image = self.numpy_to_pil(image)
432
+
433
+ if not return_dict:
434
+ return (image, has_nsfw_concept)
435
+
436
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
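
A minimal usage sketch for the multilingual pipeline above. The detection and translation checkpoints named here are illustrative choices, not requirements; any text-classification language detector and MBart-50 translation model with the interfaces used by `detect_language` / `translate_prompt` should work:

```py
import torch
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration, pipeline

from diffusers import DiffusionPipeline

# Illustrative checkpoints (assumed); swap in your preferred models.
detector = pipeline("text-classification", model="papluca/xlm-roberta-base-language-detection")
trans_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
trans_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="multilingual_stable_diffusion",
    detection_pipeline=detector,
    translation_model=trans_model,
    translation_tokenizer=trans_tokenizer,
)
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# The prompt is detected as non-English and translated to English internally before encoding.
image = pipe("Une maison de campagne au coucher du soleil", num_inference_steps=30).images[0]
image.save("countryside_house.png")
```
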
v0.19.2/one_step_unet.py ADDED
@@ -0,0 +1,24 @@
1
+ #!/usr/bin/env python3
2
+ import torch
3
+
4
+ from diffusers import DiffusionPipeline
5
+
6
+
7
+ class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
8
+ def __init__(self, unet, scheduler):
9
+ super().__init__()
10
+
11
+ self.register_modules(unet=unet, scheduler=scheduler)
12
+
13
+ def __call__(self):
14
+ image = torch.randn(
15
+ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size),
16
+ )
17
+ timestep = 1
18
+
19
+ model_output = self.unet(image, timestep).sample
20
+ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample
21
+
22
+ result = scheduler_output - scheduler_output + torch.ones_like(scheduler_output)
23
+
24
+ return result
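
This toy pipeline only checks that a registered UNet and scheduler can each be called once. A minimal sketch of how it might be invoked (the DDPM checkpoint is an illustrative choice; any repo exposing `unet` and `scheduler` components should do):

```py
from diffusers import DiffusionPipeline

# Illustrative checkpoint (assumed); the custom pipeline only needs `unet` and `scheduler`.
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")

output = pipe()
# By construction the result is a tensor of ones with the UNet's sample shape.
print(output.shape)  # e.g. torch.Size([1, 3, 32, 32])
```
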
v0.19.2/sd_text2img_k_diffusion.py ADDED
@@ -0,0 +1,475 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import importlib
16
+ import warnings
17
+ from typing import Callable, List, Optional, Union
18
+
19
+ import torch
20
+ from k_diffusion.external import CompVisDenoiser, CompVisVDenoiser
21
+
22
+ from diffusers import DiffusionPipeline, LMSDiscreteScheduler
23
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
24
+ from diffusers.utils import is_accelerate_available, logging
25
+
26
+
27
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
28
+
29
+
30
+ class ModelWrapper:
31
+ def __init__(self, model, alphas_cumprod):
32
+ self.model = model
33
+ self.alphas_cumprod = alphas_cumprod
34
+
35
+ def apply_model(self, *args, **kwargs):
36
+ if len(args) == 3:
37
+ encoder_hidden_states = args[-1]
38
+ args = args[:2]
39
+ if kwargs.get("cond", None) is not None:
40
+ encoder_hidden_states = kwargs.pop("cond")
41
+ return self.model(*args, encoder_hidden_states=encoder_hidden_states, **kwargs).sample
42
+
43
+
44
+ class StableDiffusionPipeline(DiffusionPipeline):
45
+ r"""
46
+ Pipeline for text-to-image generation using Stable Diffusion.
47
+
48
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
49
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
50
+
51
+ Args:
52
+ vae ([`AutoencoderKL`]):
53
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
54
+ text_encoder ([`CLIPTextModel`]):
55
+ Frozen text-encoder. Stable Diffusion uses the text portion of
56
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
57
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
58
+ tokenizer (`CLIPTokenizer`):
59
+ Tokenizer of class
60
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
61
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
62
+ scheduler ([`SchedulerMixin`]):
63
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
64
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
65
+ safety_checker ([`StableDiffusionSafetyChecker`]):
66
+ Classification module that estimates whether generated images could be considered offensive or harmful.
67
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
68
+ feature_extractor ([`CLIPImageProcessor`]):
69
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
70
+ """
71
+ _optional_components = ["safety_checker", "feature_extractor"]
72
+
73
+ def __init__(
74
+ self,
75
+ vae,
76
+ text_encoder,
77
+ tokenizer,
78
+ unet,
79
+ scheduler,
80
+ safety_checker,
81
+ feature_extractor,
82
+ ):
83
+ super().__init__()
84
+
85
+ if safety_checker is None:
86
+ logger.warning(
87
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
88
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
89
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
90
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
91
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
92
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
93
+ )
94
+
95
+ # get correct sigmas from LMS
96
+ scheduler = LMSDiscreteScheduler.from_config(scheduler.config)
97
+ self.register_modules(
98
+ vae=vae,
99
+ text_encoder=text_encoder,
100
+ tokenizer=tokenizer,
101
+ unet=unet,
102
+ scheduler=scheduler,
103
+ safety_checker=safety_checker,
104
+ feature_extractor=feature_extractor,
105
+ )
106
+
107
+ model = ModelWrapper(unet, scheduler.alphas_cumprod)
108
+ if scheduler.config.prediction_type == "v_prediction":
109
+ self.k_diffusion_model = CompVisVDenoiser(model)
110
+ else:
111
+ self.k_diffusion_model = CompVisDenoiser(model)
112
+
113
+ def set_sampler(self, scheduler_type: str):
114
+ warnings.warn("The `set_sampler` method is deprecated, please use `set_scheduler` instead.")
115
+ return self.set_scheduler(scheduler_type)
116
+
117
+ def set_scheduler(self, scheduler_type: str):
118
+ library = importlib.import_module("k_diffusion")
119
+ sampling = getattr(library, "sampling")
120
+ self.sampler = getattr(sampling, scheduler_type)
121
+
122
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
123
+ r"""
124
+ Enable sliced attention computation.
125
+
126
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
127
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
128
+
129
+ Args:
130
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
131
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
132
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
133
+ `attention_head_dim` must be a multiple of `slice_size`.
134
+ """
135
+ if slice_size == "auto":
136
+ # half the attention head size is usually a good trade-off between
137
+ # speed and memory
138
+ slice_size = self.unet.config.attention_head_dim // 2
139
+ self.unet.set_attention_slice(slice_size)
140
+
141
+ def disable_attention_slicing(self):
142
+ r"""
143
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
144
+ back to computing attention in one step.
145
+ """
146
+ # set slice_size = `None` to disable `attention slicing`
147
+ self.enable_attention_slicing(None)
148
+
149
+ def enable_sequential_cpu_offload(self, gpu_id=0):
150
+ r"""
151
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
152
+ text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
153
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
154
+ """
155
+ if is_accelerate_available():
156
+ from accelerate import cpu_offload
157
+ else:
158
+ raise ImportError("Please install accelerate via `pip install accelerate`")
159
+
160
+ device = torch.device(f"cuda:{gpu_id}")
161
+
162
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:
163
+ if cpu_offloaded_model is not None:
164
+ cpu_offload(cpu_offloaded_model, device)
165
+
166
+ @property
167
+ def _execution_device(self):
168
+ r"""
169
+ Returns the device on which the pipeline's models will be executed. After calling
170
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
171
+ hooks.
172
+ """
173
+ if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
174
+ return self.device
175
+ for module in self.unet.modules():
176
+ if (
177
+ hasattr(module, "_hf_hook")
178
+ and hasattr(module._hf_hook, "execution_device")
179
+ and module._hf_hook.execution_device is not None
180
+ ):
181
+ return torch.device(module._hf_hook.execution_device)
182
+ return self.device
183
+
184
+ def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
185
+ r"""
186
+ Encodes the prompt into text encoder hidden states.
187
+
188
+ Args:
189
+ prompt (`str` or `list(int)`):
190
+ prompt to be encoded
191
+ device: (`torch.device`):
192
+ torch device
193
+ num_images_per_prompt (`int`):
194
+ number of images that should be generated per prompt
195
+ do_classifier_free_guidance (`bool`):
196
+ whether to use classifier free guidance or not
197
+ negative_prompt (`str` or `List[str]`):
198
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
199
+ if `guidance_scale` is less than `1`).
200
+ """
201
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
202
+
203
+ text_inputs = self.tokenizer(
204
+ prompt,
205
+ padding="max_length",
206
+ max_length=self.tokenizer.model_max_length,
207
+ truncation=True,
208
+ return_tensors="pt",
209
+ )
210
+ text_input_ids = text_inputs.input_ids
211
+ untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids
212
+
213
+ if not torch.equal(text_input_ids, untruncated_ids):
214
+ removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
215
+ logger.warning(
216
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
217
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
218
+ )
219
+
220
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
221
+ attention_mask = text_inputs.attention_mask.to(device)
222
+ else:
223
+ attention_mask = None
224
+
225
+ text_embeddings = self.text_encoder(
226
+ text_input_ids.to(device),
227
+ attention_mask=attention_mask,
228
+ )
229
+ text_embeddings = text_embeddings[0]
230
+
231
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
232
+ bs_embed, seq_len, _ = text_embeddings.shape
233
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
234
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
235
+
236
+ # get unconditional embeddings for classifier free guidance
237
+ if do_classifier_free_guidance:
238
+ uncond_tokens: List[str]
239
+ if negative_prompt is None:
240
+ uncond_tokens = [""] * batch_size
241
+ elif type(prompt) is not type(negative_prompt):
242
+ raise TypeError(
243
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
244
+ f" {type(prompt)}."
245
+ )
246
+ elif isinstance(negative_prompt, str):
247
+ uncond_tokens = [negative_prompt]
248
+ elif batch_size != len(negative_prompt):
249
+ raise ValueError(
250
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
251
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
252
+ " the batch size of `prompt`."
253
+ )
254
+ else:
255
+ uncond_tokens = negative_prompt
256
+
257
+ max_length = text_input_ids.shape[-1]
258
+ uncond_input = self.tokenizer(
259
+ uncond_tokens,
260
+ padding="max_length",
261
+ max_length=max_length,
262
+ truncation=True,
263
+ return_tensors="pt",
264
+ )
265
+
266
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
267
+ attention_mask = uncond_input.attention_mask.to(device)
268
+ else:
269
+ attention_mask = None
270
+
271
+ uncond_embeddings = self.text_encoder(
272
+ uncond_input.input_ids.to(device),
273
+ attention_mask=attention_mask,
274
+ )
275
+ uncond_embeddings = uncond_embeddings[0]
276
+
277
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
278
+ seq_len = uncond_embeddings.shape[1]
279
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
280
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
281
+
282
+ # For classifier free guidance, we need to do two forward passes.
283
+ # Here we concatenate the unconditional and text embeddings into a single batch
284
+ # to avoid doing two forward passes
285
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
286
+
287
+ return text_embeddings
288
+
289
+ def run_safety_checker(self, image, device, dtype):
290
+ if self.safety_checker is not None:
291
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
292
+ image, has_nsfw_concept = self.safety_checker(
293
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
294
+ )
295
+ else:
296
+ has_nsfw_concept = None
297
+ return image, has_nsfw_concept
298
+
299
+ def decode_latents(self, latents):
300
+ latents = 1 / 0.18215 * latents
301
+ image = self.vae.decode(latents).sample
302
+ image = (image / 2 + 0.5).clamp(0, 1)
303
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
304
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
305
+ return image
306
+
307
+ def check_inputs(self, prompt, height, width, callback_steps):
308
+ if not isinstance(prompt, str) and not isinstance(prompt, list):
309
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
310
+
311
+ if height % 8 != 0 or width % 8 != 0:
312
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
313
+
314
+ if (callback_steps is None) or (
315
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
316
+ ):
317
+ raise ValueError(
318
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
319
+ f" {type(callback_steps)}."
320
+ )
321
+
322
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
323
+ shape = (batch_size, num_channels_latents, height // 8, width // 8)
324
+ if latents is None:
325
+ if device.type == "mps":
326
+ # randn does not work reproducibly on mps
327
+ latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
328
+ else:
329
+ latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)
330
+ else:
331
+ if latents.shape != shape:
332
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
333
+ latents = latents.to(device)
334
+
335
+ # scale the initial noise by the standard deviation required by the scheduler
336
+ return latents
337
+
338
+ @torch.no_grad()
339
+ def __call__(
340
+ self,
341
+ prompt: Union[str, List[str]],
342
+ height: int = 512,
343
+ width: int = 512,
344
+ num_inference_steps: int = 50,
345
+ guidance_scale: float = 7.5,
346
+ negative_prompt: Optional[Union[str, List[str]]] = None,
347
+ num_images_per_prompt: Optional[int] = 1,
348
+ eta: float = 0.0,
349
+ generator: Optional[torch.Generator] = None,
350
+ latents: Optional[torch.FloatTensor] = None,
351
+ output_type: Optional[str] = "pil",
352
+ return_dict: bool = True,
353
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
354
+ callback_steps: int = 1,
355
+ **kwargs,
356
+ ):
357
+ r"""
358
+ Function invoked when calling the pipeline for generation.
359
+
360
+ Args:
361
+ prompt (`str` or `List[str]`):
362
+ The prompt or prompts to guide the image generation.
363
+ height (`int`, *optional*, defaults to 512):
364
+ The height in pixels of the generated image.
365
+ width (`int`, *optional*, defaults to 512):
366
+ The width in pixels of the generated image.
367
+ num_inference_steps (`int`, *optional*, defaults to 50):
368
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
369
+ expense of slower inference.
370
+ guidance_scale (`float`, *optional*, defaults to 7.5):
371
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
372
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
373
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
374
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
375
+ usually at the expense of lower image quality.
376
+ negative_prompt (`str` or `List[str]`, *optional*):
377
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
378
+ if `guidance_scale` is less than `1`).
379
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
380
+ The number of images to generate per prompt.
381
+ eta (`float`, *optional*, defaults to 0.0):
382
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
383
+ [`schedulers.DDIMScheduler`], will be ignored for others.
384
+ generator (`torch.Generator`, *optional*):
385
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
386
+ deterministic.
387
+ latents (`torch.FloatTensor`, *optional*):
388
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
389
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
390
+ tensor will be generated by sampling using the supplied random `generator`.
391
+ output_type (`str`, *optional*, defaults to `"pil"`):
392
+ The output format of the generate image. Choose between
393
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
394
+ return_dict (`bool`, *optional*, defaults to `True`):
395
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
396
+ plain tuple.
397
+ callback (`Callable`, *optional*):
398
+ A function that will be called every `callback_steps` steps during inference. The function will be
399
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
400
+ callback_steps (`int`, *optional*, defaults to 1):
401
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
402
+ called at every step.
403
+
404
+ Returns:
405
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
406
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
407
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
408
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
409
+ (nsfw) content, according to the `safety_checker`.
410
+ """
411
+
412
+ # 1. Check inputs. Raise error if not correct
413
+ self.check_inputs(prompt, height, width, callback_steps)
414
+
415
+ # 2. Define call parameters
416
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
417
+ device = self._execution_device
418
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
419
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
420
+ # corresponds to doing no classifier free guidance.
421
+ do_classifier_free_guidance = True
422
+ if guidance_scale <= 1.0:
423
+ raise ValueError("`guidance_scale` has to be greater than 1 for this pipeline, since it always applies classifier-free guidance.")
424
+
425
+ # 3. Encode input prompt
426
+ text_embeddings = self._encode_prompt(
427
+ prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
428
+ )
429
+
430
+ # 4. Prepare timesteps
431
+ self.scheduler.set_timesteps(num_inference_steps, device=text_embeddings.device)
432
+ sigmas = self.scheduler.sigmas
433
+ sigmas = sigmas.to(text_embeddings.dtype)
434
+
435
+ # 5. Prepare latent variables
436
+ num_channels_latents = self.unet.config.in_channels
437
+ latents = self.prepare_latents(
438
+ batch_size * num_images_per_prompt,
439
+ num_channels_latents,
440
+ height,
441
+ width,
442
+ text_embeddings.dtype,
443
+ device,
444
+ generator,
445
+ latents,
446
+ )
447
+ latents = latents * sigmas[0]
448
+ self.k_diffusion_model.sigmas = self.k_diffusion_model.sigmas.to(latents.device)
449
+ self.k_diffusion_model.log_sigmas = self.k_diffusion_model.log_sigmas.to(latents.device)
450
+
451
+ def model_fn(x, t):
452
+ latent_model_input = torch.cat([x] * 2)
453
+
454
+ noise_pred = self.k_diffusion_model(latent_model_input, t, cond=text_embeddings)
455
+
456
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
457
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
458
+ return noise_pred
459
+
460
+ latents = self.sampler(model_fn, latents, sigmas)
461
+
462
+ # 8. Post-processing
463
+ image = self.decode_latents(latents)
464
+
465
+ # 9. Run safety checker
466
+ image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype)
467
+
468
+ # 10. Convert to PIL
469
+ if output_type == "pil":
470
+ image = self.numpy_to_pil(image)
471
+
472
+ if not return_dict:
473
+ return (image, has_nsfw_concept)
474
+
475
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
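Usage note (editorial, illustrative only): a minimal sketch of driving the k-diffusion text-to-image pipeline above. It assumes the file is loaded as the `sd_text2img_k_diffusion` community pipeline, that a CUDA GPU is available, and that a sampler-selection helper (`set_scheduler`) exists in the part of the file not shown here; treat those names as assumptions rather than guarantees.
```py
# A minimal sketch, not part of the committed file. Assumes this module is installed as the
# "sd_text2img_k_diffusion" community pipeline and that a CUDA GPU is available.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="sd_text2img_k_diffusion",
    torch_dtype=torch.float16,
).to("cuda")

# Assumed helper for picking a k-diffusion sampler; it lives outside the excerpt shown above.
pipe.set_scheduler("sample_heun")

generator = torch.Generator(device="cuda").manual_seed(0)
# guidance_scale must be > 1: the __call__ above raises a ValueError otherwise.
image = pipe(
    "an astronaut riding a horse on the moon",
    num_inference_steps=25,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("k_diffusion_text2img.png")
```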
v0.19.2/seed_resize_stable_diffusion.py ADDED
@@ -0,0 +1,366 @@
1
+ """
2
+ modified based on diffusion library from Huggingface: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
3
+ """
4
+ import inspect
5
+ from typing import Callable, List, Optional, Union
6
+
7
+ import torch
8
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
9
+
10
+ from diffusers import DiffusionPipeline
11
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
13
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
14
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
15
+ from diffusers.utils import logging
16
+
17
+
18
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
19
+
20
+
21
+ class SeedResizeStableDiffusionPipeline(DiffusionPipeline):
22
+ r"""
23
+ Pipeline for text-to-image generation using Stable Diffusion.
24
+
25
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
26
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
27
+
28
+ Args:
29
+ vae ([`AutoencoderKL`]):
30
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
31
+ text_encoder ([`CLIPTextModel`]):
32
+ Frozen text-encoder. Stable Diffusion uses the text portion of
33
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
34
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
35
+ tokenizer (`CLIPTokenizer`):
36
+ Tokenizer of class
37
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
38
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
39
+ scheduler ([`SchedulerMixin`]):
40
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
41
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
42
+ safety_checker ([`StableDiffusionSafetyChecker`]):
43
+ Classification module that estimates whether generated images could be considered offensive or harmful.
44
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
45
+ feature_extractor ([`CLIPImageProcessor`]):
46
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
47
+ """
48
+
49
+ def __init__(
50
+ self,
51
+ vae: AutoencoderKL,
52
+ text_encoder: CLIPTextModel,
53
+ tokenizer: CLIPTokenizer,
54
+ unet: UNet2DConditionModel,
55
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
56
+ safety_checker: StableDiffusionSafetyChecker,
57
+ feature_extractor: CLIPImageProcessor,
58
+ ):
59
+ super().__init__()
60
+ self.register_modules(
61
+ vae=vae,
62
+ text_encoder=text_encoder,
63
+ tokenizer=tokenizer,
64
+ unet=unet,
65
+ scheduler=scheduler,
66
+ safety_checker=safety_checker,
67
+ feature_extractor=feature_extractor,
68
+ )
69
+
70
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
71
+ r"""
72
+ Enable sliced attention computation.
73
+
74
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
75
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
76
+
77
+ Args:
78
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
79
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
80
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
81
+ `attention_head_dim` must be a multiple of `slice_size`.
82
+ """
83
+ if slice_size == "auto":
84
+ # half the attention head size is usually a good trade-off between
85
+ # speed and memory
86
+ slice_size = self.unet.config.attention_head_dim // 2
87
+ self.unet.set_attention_slice(slice_size)
88
+
89
+ def disable_attention_slicing(self):
90
+ r"""
91
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
92
+ back to computing attention in one step.
93
+ """
94
+ # set slice_size = `None` to disable `attention slicing`
95
+ self.enable_attention_slicing(None)
96
+
97
+ @torch.no_grad()
98
+ def __call__(
99
+ self,
100
+ prompt: Union[str, List[str]],
101
+ height: int = 512,
102
+ width: int = 512,
103
+ num_inference_steps: int = 50,
104
+ guidance_scale: float = 7.5,
105
+ negative_prompt: Optional[Union[str, List[str]]] = None,
106
+ num_images_per_prompt: Optional[int] = 1,
107
+ eta: float = 0.0,
108
+ generator: Optional[torch.Generator] = None,
109
+ latents: Optional[torch.FloatTensor] = None,
110
+ output_type: Optional[str] = "pil",
111
+ return_dict: bool = True,
112
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
113
+ callback_steps: int = 1,
114
+ text_embeddings: Optional[torch.FloatTensor] = None,
115
+ **kwargs,
116
+ ):
117
+ r"""
118
+ Function invoked when calling the pipeline for generation.
119
+
120
+ Args:
121
+ prompt (`str` or `List[str]`):
122
+ The prompt or prompts to guide the image generation.
123
+ height (`int`, *optional*, defaults to 512):
124
+ The height in pixels of the generated image.
125
+ width (`int`, *optional*, defaults to 512):
126
+ The width in pixels of the generated image.
127
+ num_inference_steps (`int`, *optional*, defaults to 50):
128
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
129
+ expense of slower inference.
130
+ guidance_scale (`float`, *optional*, defaults to 7.5):
131
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
132
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
133
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
134
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
135
+ usually at the expense of lower image quality.
136
+ negative_prompt (`str` or `List[str]`, *optional*):
137
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
138
+ if `guidance_scale` is less than `1`).
139
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
140
+ The number of images to generate per prompt.
141
+ eta (`float`, *optional*, defaults to 0.0):
142
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
143
+ [`schedulers.DDIMScheduler`], will be ignored for others.
144
+ generator (`torch.Generator`, *optional*):
145
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
146
+ deterministic.
147
+ latents (`torch.FloatTensor`, *optional*):
148
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
149
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
150
+ tensor will ge generated by sampling using the supplied random `generator`.
151
+ output_type (`str`, *optional*, defaults to `"pil"`):
152
+ The output format of the generate image. Choose between
153
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
154
+ return_dict (`bool`, *optional*, defaults to `True`):
155
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
156
+ plain tuple.
157
+ callback (`Callable`, *optional*):
158
+ A function that will be called every `callback_steps` steps during inference. The function will be
159
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
160
+ callback_steps (`int`, *optional*, defaults to 1):
161
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
162
+ called at every step.
163
+
164
+ Returns:
165
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
166
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
167
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
168
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
169
+ (nsfw) content, according to the `safety_checker`.
170
+ """
171
+
172
+ if isinstance(prompt, str):
173
+ batch_size = 1
174
+ elif isinstance(prompt, list):
175
+ batch_size = len(prompt)
176
+ else:
177
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
178
+
179
+ if height % 8 != 0 or width % 8 != 0:
180
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
181
+
182
+ if (callback_steps is None) or (
183
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
184
+ ):
185
+ raise ValueError(
186
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
187
+ f" {type(callback_steps)}."
188
+ )
189
+
190
+ # get prompt text embeddings
191
+ text_inputs = self.tokenizer(
192
+ prompt,
193
+ padding="max_length",
194
+ max_length=self.tokenizer.model_max_length,
195
+ return_tensors="pt",
196
+ )
197
+ text_input_ids = text_inputs.input_ids
198
+
199
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
200
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
201
+ logger.warning(
202
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
203
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
204
+ )
205
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
206
+
207
+ if text_embeddings is None:
208
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
209
+
210
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
211
+ bs_embed, seq_len, _ = text_embeddings.shape
212
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
213
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
214
+
215
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
216
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
217
+ # corresponds to doing no classifier free guidance.
218
+ do_classifier_free_guidance = guidance_scale > 1.0
219
+ # get unconditional embeddings for classifier free guidance
220
+ if do_classifier_free_guidance:
221
+ uncond_tokens: List[str]
222
+ if negative_prompt is None:
223
+ uncond_tokens = [""]
224
+ elif type(prompt) is not type(negative_prompt):
225
+ raise TypeError(
226
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
227
+ f" {type(prompt)}."
228
+ )
229
+ elif isinstance(negative_prompt, str):
230
+ uncond_tokens = [negative_prompt]
231
+ elif batch_size != len(negative_prompt):
232
+ raise ValueError(
233
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
234
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
235
+ " the batch size of `prompt`."
236
+ )
237
+ else:
238
+ uncond_tokens = negative_prompt
239
+
240
+ max_length = text_input_ids.shape[-1]
241
+ uncond_input = self.tokenizer(
242
+ uncond_tokens,
243
+ padding="max_length",
244
+ max_length=max_length,
245
+ truncation=True,
246
+ return_tensors="pt",
247
+ )
248
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
249
+
250
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
251
+ seq_len = uncond_embeddings.shape[1]
252
+ uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
253
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
254
+
255
+ # For classifier free guidance, we need to do two forward passes.
256
+ # Here we concatenate the unconditional and text embeddings into a single batch
257
+ # to avoid doing two forward passes
258
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
259
+
260
+ # get the initial random noise unless the user supplied it
261
+
262
+ # Unlike in other pipelines, latents need to be generated in the target device
263
+ # for 1-to-1 results reproducibility with the CompVis implementation.
264
+ # However this currently doesn't work in `mps`.
265
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
266
+ latents_shape_reference = (batch_size * num_images_per_prompt, self.unet.config.in_channels, 64, 64)
267
+ latents_dtype = text_embeddings.dtype
268
+ if latents is None:
269
+ if self.device.type == "mps":
270
+ # randn does not exist on mps
271
+ latents_reference = torch.randn(
272
+ latents_shape_reference, generator=generator, device="cpu", dtype=latents_dtype
273
+ ).to(self.device)
274
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
275
+ self.device
276
+ )
277
+ else:
278
+ latents_reference = torch.randn(
279
+ latents_shape_reference, generator=generator, device=self.device, dtype=latents_dtype
280
+ )
281
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
282
+ else:
283
+ if latents.shape != latents_shape:
284
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
285
+ latents_reference = latents.to(self.device)
286
+ latents = latents.to(self.device)
287
+
288
+ # This is the key part of the pipeline where we
289
+ # try to ensure that the generated images w/ the same seed
290
+ # but different sizes actually result in similar images
291
+ dx = (latents_shape[3] - latents_shape_reference[3]) // 2
292
+ dy = (latents_shape[2] - latents_shape_reference[2]) // 2
293
+ w = latents_shape_reference[3] if dx >= 0 else latents_shape_reference[3] + 2 * dx
294
+ h = latents_shape_reference[2] if dy >= 0 else latents_shape_reference[2] + 2 * dy
295
+ tx = 0 if dx < 0 else dx
296
+ ty = 0 if dy < 0 else dy
297
+ dx = max(-dx, 0)
298
+ dy = max(-dy, 0)
299
+ # import pdb
300
+ # pdb.set_trace()
301
+ latents[:, :, ty : ty + h, tx : tx + w] = latents_reference[:, :, dy : dy + h, dx : dx + w]
302
+
303
+ # set timesteps
304
+ self.scheduler.set_timesteps(num_inference_steps)
305
+
306
+ # Some schedulers like PNDM have timesteps as arrays
307
+ # It's more optimized to move all timesteps to correct device beforehand
308
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
309
+
310
+ # scale the initial noise by the standard deviation required by the scheduler
311
+ latents = latents * self.scheduler.init_noise_sigma
312
+
313
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
314
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
315
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
316
+ # and should be between [0, 1]
317
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
318
+ extra_step_kwargs = {}
319
+ if accepts_eta:
320
+ extra_step_kwargs["eta"] = eta
321
+
322
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
323
+ # expand the latents if we are doing classifier free guidance
324
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
325
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
326
+
327
+ # predict the noise residual
328
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
329
+
330
+ # perform guidance
331
+ if do_classifier_free_guidance:
332
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
333
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
334
+
335
+ # compute the previous noisy sample x_t -> x_t-1
336
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
337
+
338
+ # call the callback, if provided
339
+ if callback is not None and i % callback_steps == 0:
340
+ callback(i, t, latents)
341
+
342
+ latents = 1 / 0.18215 * latents
343
+ image = self.vae.decode(latents).sample
344
+
345
+ image = (image / 2 + 0.5).clamp(0, 1)
346
+
347
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
348
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
349
+
350
+ if self.safety_checker is not None:
351
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
352
+ self.device
353
+ )
354
+ image, has_nsfw_concept = self.safety_checker(
355
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
356
+ )
357
+ else:
358
+ has_nsfw_concept = None
359
+
360
+ if output_type == "pil":
361
+ image = self.numpy_to_pil(image)
362
+
363
+ if not return_dict:
364
+ return (image, has_nsfw_concept)
365
+
366
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
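Usage note (editorial, illustrative only): a minimal sketch for the seed-resize pipeline above. Its point is that a fixed seed yields similar compositions at different resolutions, because the shared 64x64 reference noise is copied into the target-size latents before denoising. The community-pipeline name is assumed to match the file name.
```py
# A minimal sketch, not part of the committed file. Assumes the module is loaded as the
# "seed_resize_stable_diffusion" community pipeline and that a CUDA GPU is available.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="seed_resize_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a fantasy landscape, detailed matte painting"
# The same seed at two sizes should produce visibly related images, since the 64x64 reference
# noise is pasted into the centre of the larger latent grid before denoising starts.
for height, width in [(512, 512), (512, 768)]:
    generator = torch.Generator(device="cuda").manual_seed(42)
    image = pipe(prompt, height=height, width=width, generator=generator).images[0]
    image.save(f"seed_resize_{width}x{height}.png")
```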
v0.19.2/speech_to_image_diffusion.py ADDED
@@ -0,0 +1,261 @@
1
+ import inspect
2
+ from typing import Callable, List, Optional, Union
3
+
4
+ import torch
5
+ from transformers import (
6
+ CLIPImageProcessor,
7
+ CLIPTextModel,
8
+ CLIPTokenizer,
9
+ WhisperForConditionalGeneration,
10
+ WhisperProcessor,
11
+ )
12
+
13
+ from diffusers import (
14
+ AutoencoderKL,
15
+ DDIMScheduler,
16
+ DiffusionPipeline,
17
+ LMSDiscreteScheduler,
18
+ PNDMScheduler,
19
+ UNet2DConditionModel,
20
+ )
21
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
22
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
23
+ from diffusers.utils import logging
24
+
25
+
26
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
27
+
28
+
29
+ class SpeechToImagePipeline(DiffusionPipeline):
30
+ def __init__(
31
+ self,
32
+ speech_model: WhisperForConditionalGeneration,
33
+ speech_processor: WhisperProcessor,
34
+ vae: AutoencoderKL,
35
+ text_encoder: CLIPTextModel,
36
+ tokenizer: CLIPTokenizer,
37
+ unet: UNet2DConditionModel,
38
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
39
+ safety_checker: StableDiffusionSafetyChecker,
40
+ feature_extractor: CLIPImageProcessor,
41
+ ):
42
+ super().__init__()
43
+
44
+ if safety_checker is None:
45
+ logger.warning(
46
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
47
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
48
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
49
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
50
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
51
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
52
+ )
53
+
54
+ self.register_modules(
55
+ speech_model=speech_model,
56
+ speech_processor=speech_processor,
57
+ vae=vae,
58
+ text_encoder=text_encoder,
59
+ tokenizer=tokenizer,
60
+ unet=unet,
61
+ scheduler=scheduler,
62
+ feature_extractor=feature_extractor,
63
+ )
64
+
65
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
66
+ if slice_size == "auto":
67
+ slice_size = self.unet.config.attention_head_dim // 2
68
+ self.unet.set_attention_slice(slice_size)
69
+
70
+ def disable_attention_slicing(self):
71
+ self.enable_attention_slicing(None)
72
+
73
+ @torch.no_grad()
74
+ def __call__(
75
+ self,
76
+ audio,
77
+ sampling_rate=16_000,
78
+ height: int = 512,
79
+ width: int = 512,
80
+ num_inference_steps: int = 50,
81
+ guidance_scale: float = 7.5,
82
+ negative_prompt: Optional[Union[str, List[str]]] = None,
83
+ num_images_per_prompt: Optional[int] = 1,
84
+ eta: float = 0.0,
85
+ generator: Optional[torch.Generator] = None,
86
+ latents: Optional[torch.FloatTensor] = None,
87
+ output_type: Optional[str] = "pil",
88
+ return_dict: bool = True,
89
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
90
+ callback_steps: int = 1,
91
+ **kwargs,
92
+ ):
93
+ inputs = self.speech_processor.feature_extractor(
94
+ audio, return_tensors="pt", sampling_rate=sampling_rate
95
+ ).input_features.to(self.device)
96
+ predicted_ids = self.speech_model.generate(inputs, max_length=480_000)
97
+
98
+ prompt = self.speech_processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True, normalize=True)[
99
+ 0
100
+ ]
101
+
102
+ if isinstance(prompt, str):
103
+ batch_size = 1
104
+ elif isinstance(prompt, list):
105
+ batch_size = len(prompt)
106
+ else:
107
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
108
+
109
+ if height % 8 != 0 or width % 8 != 0:
110
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
111
+
112
+ if (callback_steps is None) or (
113
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
114
+ ):
115
+ raise ValueError(
116
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
117
+ f" {type(callback_steps)}."
118
+ )
119
+
120
+ # get prompt text embeddings
121
+ text_inputs = self.tokenizer(
122
+ prompt,
123
+ padding="max_length",
124
+ max_length=self.tokenizer.model_max_length,
125
+ return_tensors="pt",
126
+ )
127
+ text_input_ids = text_inputs.input_ids
128
+
129
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
130
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
131
+ logger.warning(
132
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
133
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
134
+ )
135
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
136
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
137
+
138
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
139
+ bs_embed, seq_len, _ = text_embeddings.shape
140
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
141
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
142
+
143
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
144
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
145
+ # corresponds to doing no classifier free guidance.
146
+ do_classifier_free_guidance = guidance_scale > 1.0
147
+ # get unconditional embeddings for classifier free guidance
148
+ if do_classifier_free_guidance:
149
+ uncond_tokens: List[str]
150
+ if negative_prompt is None:
151
+ uncond_tokens = [""] * batch_size
152
+ elif type(prompt) is not type(negative_prompt):
153
+ raise TypeError(
154
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
155
+ f" {type(prompt)}."
156
+ )
157
+ elif isinstance(negative_prompt, str):
158
+ uncond_tokens = [negative_prompt]
159
+ elif batch_size != len(negative_prompt):
160
+ raise ValueError(
161
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
162
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
163
+ " the batch size of `prompt`."
164
+ )
165
+ else:
166
+ uncond_tokens = negative_prompt
167
+
168
+ max_length = text_input_ids.shape[-1]
169
+ uncond_input = self.tokenizer(
170
+ uncond_tokens,
171
+ padding="max_length",
172
+ max_length=max_length,
173
+ truncation=True,
174
+ return_tensors="pt",
175
+ )
176
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
177
+
178
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
179
+ seq_len = uncond_embeddings.shape[1]
180
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
181
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
182
+
183
+ # For classifier free guidance, we need to do two forward passes.
184
+ # Here we concatenate the unconditional and text embeddings into a single batch
185
+ # to avoid doing two forward passes
186
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
187
+
188
+ # get the initial random noise unless the user supplied it
189
+
190
+ # Unlike in other pipelines, latents need to be generated in the target device
191
+ # for 1-to-1 results reproducibility with the CompVis implementation.
192
+ # However this currently doesn't work in `mps`.
193
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
194
+ latents_dtype = text_embeddings.dtype
195
+ if latents is None:
196
+ if self.device.type == "mps":
197
+ # randn does not exist on mps
198
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
199
+ self.device
200
+ )
201
+ else:
202
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
203
+ else:
204
+ if latents.shape != latents_shape:
205
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
206
+ latents = latents.to(self.device)
207
+
208
+ # set timesteps
209
+ self.scheduler.set_timesteps(num_inference_steps)
210
+
211
+ # Some schedulers like PNDM have timesteps as arrays
212
+ # It's more optimized to move all timesteps to correct device beforehand
213
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
214
+
215
+ # scale the initial noise by the standard deviation required by the scheduler
216
+ latents = latents * self.scheduler.init_noise_sigma
217
+
218
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
219
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
220
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
221
+ # and should be between [0, 1]
222
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
223
+ extra_step_kwargs = {}
224
+ if accepts_eta:
225
+ extra_step_kwargs["eta"] = eta
226
+
227
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
228
+ # expand the latents if we are doing classifier free guidance
229
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
230
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
231
+
232
+ # predict the noise residual
233
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
234
+
235
+ # perform guidance
236
+ if do_classifier_free_guidance:
237
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
238
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
239
+
240
+ # compute the previous noisy sample x_t -> x_t-1
241
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
242
+
243
+ # call the callback, if provided
244
+ if callback is not None and i % callback_steps == 0:
245
+ callback(i, t, latents)
246
+
247
+ latents = 1 / 0.18215 * latents
248
+ image = self.vae.decode(latents).sample
249
+
250
+ image = (image / 2 + 0.5).clamp(0, 1)
251
+
252
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
253
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
254
+
255
+ if output_type == "pil":
256
+ image = self.numpy_to_pil(image)
257
+
258
+ if not return_dict:
259
+ return image
260
+
261
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
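Usage note (editorial, illustrative only): a minimal sketch for the speech-to-image pipeline above. The Whisper checkpoint, the community-pipeline name, and the placeholder audio are assumptions; any 16 kHz mono float waveform of speech will do.
```py
# A minimal sketch, not part of the committed file. Assumes the module is loaded as the
# "speech_to_image_diffusion" community pipeline and a CUDA GPU is available; the Whisper
# checkpoint below is an arbitrary choice.
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from diffusers import DiffusionPipeline

speech_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
speech_processor = WhisperProcessor.from_pretrained("openai/whisper-small")

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="speech_to_image_diffusion",
    speech_model=speech_model,
    speech_processor=speech_processor,
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder: replace with a real 16 kHz mono speech waveform (1-D float array).
audio = np.zeros(16_000, dtype=np.float32)

image = pipe(audio, sampling_rate=16_000, num_inference_steps=25).images[0]
image.save("speech_to_image.png")
```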
v0.19.2/stable_diffusion_comparison.py ADDED
@@ -0,0 +1,405 @@
1
+ from typing import Any, Callable, Dict, List, Optional, Union
2
+
3
+ import torch
4
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
5
+
6
+ from diffusers import (
7
+ AutoencoderKL,
8
+ DDIMScheduler,
9
+ DiffusionPipeline,
10
+ LMSDiscreteScheduler,
11
+ PNDMScheduler,
12
+ StableDiffusionPipeline,
13
+ UNet2DConditionModel,
14
+ )
15
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
16
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
17
+
18
+
19
+ pipe1_model_id = "CompVis/stable-diffusion-v1-1"
20
+ pipe2_model_id = "CompVis/stable-diffusion-v1-2"
21
+ pipe3_model_id = "CompVis/stable-diffusion-v1-3"
22
+ pipe4_model_id = "CompVis/stable-diffusion-v1-4"
23
+
24
+
25
+ class StableDiffusionComparisonPipeline(DiffusionPipeline):
26
+ r"""
27
+ Pipeline for parallel comparison of Stable Diffusion v1-v4
28
+ This pipeline inherits from DiffusionPipeline and depends on the use of an Auth Token for
29
+ downloading pre-trained checkpoints from Hugging Face Hub.
30
+ If using Hugging Face Hub, pass the Model ID for Stable Diffusion v1.4 as the previous 3 checkpoints will be loaded
31
+ automatically.
32
+ Args:
33
+ vae ([`AutoencoderKL`]):
34
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
35
+ text_encoder ([`CLIPTextModel`]):
36
+ Frozen text-encoder. Stable Diffusion uses the text portion of
37
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
38
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
39
+ tokenizer (`CLIPTokenizer`):
40
+ Tokenizer of class
41
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
42
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
43
+ scheduler ([`SchedulerMixin`]):
44
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
45
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
46
+ safety_checker ([`StableDiffusionMegaSafetyChecker`]):
47
+ Classification module that estimates whether generated images could be considered offensive or harmful.
48
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
49
+ feature_extractor ([`CLIPImageProcessor`]):
50
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
51
+ """
52
+
53
+ def __init__(
54
+ self,
55
+ vae: AutoencoderKL,
56
+ text_encoder: CLIPTextModel,
57
+ tokenizer: CLIPTokenizer,
58
+ unet: UNet2DConditionModel,
59
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
60
+ safety_checker: StableDiffusionSafetyChecker,
61
+ feature_extractor: CLIPImageProcessor,
62
+ requires_safety_checker: bool = True,
63
+ ):
64
+ super().__init__()
65
+
66
+ self.pipe1 = StableDiffusionPipeline.from_pretrained(pipe1_model_id)
67
+ self.pipe2 = StableDiffusionPipeline.from_pretrained(pipe2_model_id)
68
+ self.pipe3 = StableDiffusionPipeline.from_pretrained(pipe3_model_id)
69
+ self.pipe4 = StableDiffusionPipeline(
70
+ vae=vae,
71
+ text_encoder=text_encoder,
72
+ tokenizer=tokenizer,
73
+ unet=unet,
74
+ scheduler=scheduler,
75
+ safety_checker=safety_checker,
76
+ feature_extractor=feature_extractor,
77
+ requires_safety_checker=requires_safety_checker,
78
+ )
79
+
80
+ self.register_modules(pipeline1=self.pipe1, pipeline2=self.pipe2, pipeline3=self.pipe3, pipeline4=self.pipe4)
81
+
82
+ @property
83
+ def layers(self) -> Dict[str, Any]:
84
+ return {k: getattr(self, k) for k in self.config.keys() if not k.startswith("_")}
85
+
86
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
87
+ r"""
88
+ Enable sliced attention computation.
89
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
90
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
91
+ Args:
92
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
93
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
94
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
95
+ `attention_head_dim` must be a multiple of `slice_size`.
96
+ """
97
+ if slice_size == "auto":
98
+ # half the attention head size is usually a good trade-off between
99
+ # speed and memory
100
+ slice_size = self.unet.config.attention_head_dim // 2
101
+ self.unet.set_attention_slice(slice_size)
102
+
103
+ def disable_attention_slicing(self):
104
+ r"""
105
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
106
+ back to computing attention in one step.
107
+ """
108
+ # set slice_size = `None` to disable `attention slicing`
109
+ self.enable_attention_slicing(None)
110
+
111
+ @torch.no_grad()
112
+ def text2img_sd1_1(
113
+ self,
114
+ prompt: Union[str, List[str]],
115
+ height: int = 512,
116
+ width: int = 512,
117
+ num_inference_steps: int = 50,
118
+ guidance_scale: float = 7.5,
119
+ negative_prompt: Optional[Union[str, List[str]]] = None,
120
+ num_images_per_prompt: Optional[int] = 1,
121
+ eta: float = 0.0,
122
+ generator: Optional[torch.Generator] = None,
123
+ latents: Optional[torch.FloatTensor] = None,
124
+ output_type: Optional[str] = "pil",
125
+ return_dict: bool = True,
126
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
127
+ callback_steps: int = 1,
128
+ **kwargs,
129
+ ):
130
+ return self.pipe1(
131
+ prompt=prompt,
132
+ height=height,
133
+ width=width,
134
+ num_inference_steps=num_inference_steps,
135
+ guidance_scale=guidance_scale,
136
+ negative_prompt=negative_prompt,
137
+ num_images_per_prompt=num_images_per_prompt,
138
+ eta=eta,
139
+ generator=generator,
140
+ latents=latents,
141
+ output_type=output_type,
142
+ return_dict=return_dict,
143
+ callback=callback,
144
+ callback_steps=callback_steps,
145
+ **kwargs,
146
+ )
147
+
148
+ @torch.no_grad()
149
+ def text2img_sd1_2(
150
+ self,
151
+ prompt: Union[str, List[str]],
152
+ height: int = 512,
153
+ width: int = 512,
154
+ num_inference_steps: int = 50,
155
+ guidance_scale: float = 7.5,
156
+ negative_prompt: Optional[Union[str, List[str]]] = None,
157
+ num_images_per_prompt: Optional[int] = 1,
158
+ eta: float = 0.0,
159
+ generator: Optional[torch.Generator] = None,
160
+ latents: Optional[torch.FloatTensor] = None,
161
+ output_type: Optional[str] = "pil",
162
+ return_dict: bool = True,
163
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
164
+ callback_steps: int = 1,
165
+ **kwargs,
166
+ ):
167
+ return self.pipe2(
168
+ prompt=prompt,
169
+ height=height,
170
+ width=width,
171
+ num_inference_steps=num_inference_steps,
172
+ guidance_scale=guidance_scale,
173
+ negative_prompt=negative_prompt,
174
+ num_images_per_prompt=num_images_per_prompt,
175
+ eta=eta,
176
+ generator=generator,
177
+ latents=latents,
178
+ output_type=output_type,
179
+ return_dict=return_dict,
180
+ callback=callback,
181
+ callback_steps=callback_steps,
182
+ **kwargs,
183
+ )
184
+
185
+ @torch.no_grad()
186
+ def text2img_sd1_3(
187
+ self,
188
+ prompt: Union[str, List[str]],
189
+ height: int = 512,
190
+ width: int = 512,
191
+ num_inference_steps: int = 50,
192
+ guidance_scale: float = 7.5,
193
+ negative_prompt: Optional[Union[str, List[str]]] = None,
194
+ num_images_per_prompt: Optional[int] = 1,
195
+ eta: float = 0.0,
196
+ generator: Optional[torch.Generator] = None,
197
+ latents: Optional[torch.FloatTensor] = None,
198
+ output_type: Optional[str] = "pil",
199
+ return_dict: bool = True,
200
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
201
+ callback_steps: int = 1,
202
+ **kwargs,
203
+ ):
204
+ return self.pipe3(
205
+ prompt=prompt,
206
+ height=height,
207
+ width=width,
208
+ num_inference_steps=num_inference_steps,
209
+ guidance_scale=guidance_scale,
210
+ negative_prompt=negative_prompt,
211
+ num_images_per_prompt=num_images_per_prompt,
212
+ eta=eta,
213
+ generator=generator,
214
+ latents=latents,
215
+ output_type=output_type,
216
+ return_dict=return_dict,
217
+ callback=callback,
218
+ callback_steps=callback_steps,
219
+ **kwargs,
220
+ )
221
+
222
+ @torch.no_grad()
223
+ def text2img_sd1_4(
224
+ self,
225
+ prompt: Union[str, List[str]],
226
+ height: int = 512,
227
+ width: int = 512,
228
+ num_inference_steps: int = 50,
229
+ guidance_scale: float = 7.5,
230
+ negative_prompt: Optional[Union[str, List[str]]] = None,
231
+ num_images_per_prompt: Optional[int] = 1,
232
+ eta: float = 0.0,
233
+ generator: Optional[torch.Generator] = None,
234
+ latents: Optional[torch.FloatTensor] = None,
235
+ output_type: Optional[str] = "pil",
236
+ return_dict: bool = True,
237
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
238
+ callback_steps: int = 1,
239
+ **kwargs,
240
+ ):
241
+ return self.pipe4(
242
+ prompt=prompt,
243
+ height=height,
244
+ width=width,
245
+ num_inference_steps=num_inference_steps,
246
+ guidance_scale=guidance_scale,
247
+ negative_prompt=negative_prompt,
248
+ num_images_per_prompt=num_images_per_prompt,
249
+ eta=eta,
250
+ generator=generator,
251
+ latents=latents,
252
+ output_type=output_type,
253
+ return_dict=return_dict,
254
+ callback=callback,
255
+ callback_steps=callback_steps,
256
+ **kwargs,
257
+ )
258
+
259
+ @torch.no_grad()
260
+ def __call__(
261
+ self,
262
+ prompt: Union[str, List[str]],
263
+ height: int = 512,
264
+ width: int = 512,
265
+ num_inference_steps: int = 50,
266
+ guidance_scale: float = 7.5,
267
+ negative_prompt: Optional[Union[str, List[str]]] = None,
268
+ num_images_per_prompt: Optional[int] = 1,
269
+ eta: float = 0.0,
270
+ generator: Optional[torch.Generator] = None,
271
+ latents: Optional[torch.FloatTensor] = None,
272
+ output_type: Optional[str] = "pil",
273
+ return_dict: bool = True,
274
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
275
+ callback_steps: int = 1,
276
+ **kwargs,
277
+ ):
278
+ r"""
279
+ Function invoked when calling the pipeline for generation. This function produces four results by
280
+ running the four pipelines for Stable Diffusion v1.1 through v1.4 sequentially on the same inputs.
281
+ Args:
282
+ prompt (`str` or `List[str]`):
283
+ The prompt or prompts to guide the image generation.
284
+ height (`int`, optional, defaults to 512):
285
+ The height in pixels of the generated image.
286
+ width (`int`, optional, defaults to 512):
287
+ The width in pixels of the generated image.
288
+ num_inference_steps (`int`, optional, defaults to 50):
289
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
290
+ expense of slower inference.
291
+ guidance_scale (`float`, optional, defaults to 7.5):
292
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
293
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
294
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
295
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
296
+ usually at the expense of lower image quality.
297
+ eta (`float`, optional, defaults to 0.0):
298
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
299
+ [`schedulers.DDIMScheduler`], will be ignored for others.
300
+ generator (`torch.Generator`, optional):
301
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
302
+ deterministic.
303
+ latents (`torch.FloatTensor`, optional):
304
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
305
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
306
+ tensor will ge generated by sampling using the supplied random `generator`.
307
+ output_type (`str`, optional, defaults to `"pil"`):
308
+ The output format of the generate image. Choose between
309
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
310
+ return_dict (`bool`, optional, defaults to `True`):
311
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
312
+ plain tuple.
313
+ Returns:
314
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
315
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
316
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
317
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
318
+ (nsfw) content, according to the `safety_checker`.
319
+ """
320
+
321
+ device = "cuda" if torch.cuda.is_available() else "cpu"
322
+ self.to(device)
323
+
324
+ # Checks if the height and width are divisible by 8 or not
325
+ if height % 8 != 0 or width % 8 != 0:
326
+ raise ValueError(f"`height` and `width` must be divisible by 8 but are {height} and {width}.")
327
+
328
+ # Get first result from Stable Diffusion Checkpoint v1.1
329
+ res1 = self.text2img_sd1_1(
330
+ prompt=prompt,
331
+ height=height,
332
+ width=width,
333
+ num_inference_steps=num_inference_steps,
334
+ guidance_scale=guidance_scale,
335
+ negative_prompt=negative_prompt,
336
+ num_images_per_prompt=num_images_per_prompt,
337
+ eta=eta,
338
+ generator=generator,
339
+ latents=latents,
340
+ output_type=output_type,
341
+ return_dict=return_dict,
342
+ callback=callback,
343
+ callback_steps=callback_steps,
344
+ **kwargs,
345
+ )
346
+
347
+ # Get result from Stable Diffusion Checkpoint v1.2
348
+ res2 = self.text2img_sd1_2(
349
+ prompt=prompt,
350
+ height=height,
351
+ width=width,
352
+ num_inference_steps=num_inference_steps,
353
+ guidance_scale=guidance_scale,
354
+ negative_prompt=negative_prompt,
355
+ num_images_per_prompt=num_images_per_prompt,
356
+ eta=eta,
357
+ generator=generator,
358
+ latents=latents,
359
+ output_type=output_type,
360
+ return_dict=return_dict,
361
+ callback=callback,
362
+ callback_steps=callback_steps,
363
+ **kwargs,
364
+ )
365
+
366
+ # Get result from Stable Diffusion Checkpoint v1.3
367
+ res3 = self.text2img_sd1_3(
368
+ prompt=prompt,
369
+ height=height,
370
+ width=width,
371
+ num_inference_steps=num_inference_steps,
372
+ guidance_scale=guidance_scale,
373
+ negative_prompt=negative_prompt,
374
+ num_images_per_prompt=num_images_per_prompt,
375
+ eta=eta,
376
+ generator=generator,
377
+ latents=latents,
378
+ output_type=output_type,
379
+ return_dict=return_dict,
380
+ callback=callback,
381
+ callback_steps=callback_steps,
382
+ **kwargs,
383
+ )
384
+
385
+ # Get result from Stable Diffusion Checkpoint v1.4
386
+ res4 = self.text2img_sd1_4(
387
+ prompt=prompt,
388
+ height=height,
389
+ width=width,
390
+ num_inference_steps=num_inference_steps,
391
+ guidance_scale=guidance_scale,
392
+ negative_prompt=negative_prompt,
393
+ num_images_per_prompt=num_images_per_prompt,
394
+ eta=eta,
395
+ generator=generator,
396
+ latents=latents,
397
+ output_type=output_type,
398
+ return_dict=return_dict,
399
+ callback=callback,
400
+ callback_steps=callback_steps,
401
+ **kwargs,
402
+ )
403
+
404
+ # Get all result images into a single list and pass it via StableDiffusionPipelineOutput for final result
405
+ return StableDiffusionPipelineOutput(images=[res1[0], res2[0], res3[0], res4[0]], nsfw_content_detected=None)
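Usage note (editorial, illustrative only): a minimal sketch for the comparison pipeline above. Only the v1-4 weights are passed through `from_pretrained`; the v1-1 to v1-3 checkpoints are downloaded inside `__init__`. The community-pipeline name is assumed to match the file name.
```py
# A minimal sketch, not part of the committed file. Assumes the module is loaded as the
# "stable_diffusion_comparison" community pipeline.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="stable_diffusion_comparison",
)
pipe.enable_attention_slicing()

# __call__ moves the pipeline to CUDA automatically when a GPU is available.
output = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25)

# output.images holds one list of images per checkpoint, ordered v1-1, v1-2, v1-3, v1-4.
for version, images in zip(["v1-1", "v1-2", "v1-3", "v1-4"], output.images):
    images[0].save(f"astronaut_{version}.png")
```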
v0.19.2/stable_diffusion_controlnet_img2img.py ADDED
@@ -0,0 +1,989 @@
1
+ # Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/
2
+
3
+ import inspect
4
+ from typing import Any, Callable, Dict, List, Optional, Tuple, Union
5
+
6
+ import numpy as np
7
+ import PIL.Image
8
+ import torch
9
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
10
+
11
+ from diffusers import AutoencoderKL, ControlNetModel, DiffusionPipeline, UNet2DConditionModel, logging
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker
13
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
14
+ from diffusers.schedulers import KarrasDiffusionSchedulers
15
+ from diffusers.utils import (
16
+ PIL_INTERPOLATION,
17
+ is_accelerate_available,
18
+ is_accelerate_version,
19
+ randn_tensor,
20
+ replace_example_docstring,
21
+ )
22
+
23
+
24
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
25
+
26
+ EXAMPLE_DOC_STRING = """
27
+ Examples:
28
+ ```py
29
+ >>> import numpy as np
30
+ >>> import torch
31
+ >>> from PIL import Image
32
+ >>> from diffusers import ControlNetModel, UniPCMultistepScheduler
33
+ >>> from diffusers.utils import load_image
34
+
35
+ >>> input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
36
+
37
+ >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
38
+
39
+ >>> pipe_controlnet = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
40
+ "runwayml/stable-diffusion-v1-5",
41
+ controlnet=controlnet,
42
+ safety_checker=None,
43
+ torch_dtype=torch.float16
44
+ )
45
+
46
+ >>> pipe_controlnet.scheduler = UniPCMultistepScheduler.from_config(pipe_controlnet.scheduler.config)
47
+ >>> pipe_controlnet.enable_xformers_memory_efficient_attention()
48
+ >>> pipe_controlnet.enable_model_cpu_offload()
49
+
50
+ # using image with edges for our canny controlnet
51
+ >>> control_image = load_image(
52
+ "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_canny_edged.png")
53
+
54
+
55
+ >>> result_img = pipe_controlnet(controlnet_conditioning_image=control_image,
56
+ image=input_image,
57
+ prompt="an android robot, cyberpunk, digital art masterpiece",
58
+ num_inference_steps=20).images[0]
59
+
60
+ >>> result_img.show()
61
+ ```
62
+ """
63
+
64
+
65
+ def prepare_image(image):
66
+ if isinstance(image, torch.Tensor):
67
+ # Batch single image
68
+ if image.ndim == 3:
69
+ image = image.unsqueeze(0)
70
+
71
+ image = image.to(dtype=torch.float32)
72
+ else:
73
+ # preprocess image
74
+ if isinstance(image, (PIL.Image.Image, np.ndarray)):
75
+ image = [image]
76
+
77
+ if isinstance(image, list) and isinstance(image[0], PIL.Image.Image):
78
+ image = [np.array(i.convert("RGB"))[None, :] for i in image]
79
+ image = np.concatenate(image, axis=0)
80
+ elif isinstance(image, list) and isinstance(image[0], np.ndarray):
81
+ image = np.concatenate([i[None, :] for i in image], axis=0)
82
+
83
+ image = image.transpose(0, 3, 1, 2)
84
+ image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
85
+
86
+ return image
87
+
88
+
89
+ def prepare_controlnet_conditioning_image(
90
+ controlnet_conditioning_image,
91
+ width,
92
+ height,
93
+ batch_size,
94
+ num_images_per_prompt,
95
+ device,
96
+ dtype,
97
+ do_classifier_free_guidance,
98
+ ):
99
+ if not isinstance(controlnet_conditioning_image, torch.Tensor):
100
+ if isinstance(controlnet_conditioning_image, PIL.Image.Image):
101
+ controlnet_conditioning_image = [controlnet_conditioning_image]
102
+
103
+ if isinstance(controlnet_conditioning_image[0], PIL.Image.Image):
104
+ controlnet_conditioning_image = [
105
+ np.array(i.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]))[None, :]
106
+ for i in controlnet_conditioning_image
107
+ ]
108
+ controlnet_conditioning_image = np.concatenate(controlnet_conditioning_image, axis=0)
109
+ controlnet_conditioning_image = np.array(controlnet_conditioning_image).astype(np.float32) / 255.0
110
+ controlnet_conditioning_image = controlnet_conditioning_image.transpose(0, 3, 1, 2)
111
+ controlnet_conditioning_image = torch.from_numpy(controlnet_conditioning_image)
112
+ elif isinstance(controlnet_conditioning_image[0], torch.Tensor):
113
+ controlnet_conditioning_image = torch.cat(controlnet_conditioning_image, dim=0)
114
+
115
+ image_batch_size = controlnet_conditioning_image.shape[0]
116
+
117
+ if image_batch_size == 1:
118
+ repeat_by = batch_size
119
+ else:
120
+ # image batch size is the same as prompt batch size
121
+ repeat_by = num_images_per_prompt
122
+
123
+ controlnet_conditioning_image = controlnet_conditioning_image.repeat_interleave(repeat_by, dim=0)
124
+
125
+ controlnet_conditioning_image = controlnet_conditioning_image.to(device=device, dtype=dtype)
126
+
127
+ if do_classifier_free_guidance:
128
+ controlnet_conditioning_image = torch.cat([controlnet_conditioning_image] * 2)
129
+
130
+ return controlnet_conditioning_image
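+ 
+ # Editor's note (hedged sketch, not from the original file): for a single 512x512 PIL control
+ # image with batch_size=1 and num_images_per_prompt=1, this helper returns a [0, 1]-scaled
+ # tensor of shape (1, 3, 512, 512); with classifier-free guidance the batch is duplicated:
+ #
+ #     cond = prepare_controlnet_conditioning_image(
+ #         pil_image,  # hypothetical PIL.Image.Image
+ #         width=512, height=512, batch_size=1, num_images_per_prompt=1,
+ #         device="cpu", dtype=torch.float32, do_classifier_free_guidance=True,
+ #     )
+ #     assert cond.shape == (2, 3, 512, 512)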
131
+
132
+
133
+ class StableDiffusionControlNetImg2ImgPipeline(DiffusionPipeline):
134
+ """
135
+ Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/
136
+ """
137
+
138
+ _optional_components = ["safety_checker", "feature_extractor"]
139
+
140
+ def __init__(
141
+ self,
142
+ vae: AutoencoderKL,
143
+ text_encoder: CLIPTextModel,
144
+ tokenizer: CLIPTokenizer,
145
+ unet: UNet2DConditionModel,
146
+ controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel],
147
+ scheduler: KarrasDiffusionSchedulers,
148
+ safety_checker: StableDiffusionSafetyChecker,
149
+ feature_extractor: CLIPImageProcessor,
150
+ requires_safety_checker: bool = True,
151
+ ):
152
+ super().__init__()
153
+
154
+ if safety_checker is None and requires_safety_checker:
155
+ logger.warning(
156
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
157
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
158
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
159
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
160
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
161
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
162
+ )
163
+
164
+ if safety_checker is not None and feature_extractor is None:
165
+ raise ValueError(
166
+ f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
167
+ " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
168
+ )
169
+
170
+ if isinstance(controlnet, (list, tuple)):
171
+ controlnet = MultiControlNetModel(controlnet)
172
+
173
+ self.register_modules(
174
+ vae=vae,
175
+ text_encoder=text_encoder,
176
+ tokenizer=tokenizer,
177
+ unet=unet,
178
+ controlnet=controlnet,
179
+ scheduler=scheduler,
180
+ safety_checker=safety_checker,
181
+ feature_extractor=feature_extractor,
182
+ )
183
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
184
+ self.register_to_config(requires_safety_checker=requires_safety_checker)
185
+
186
+ def enable_vae_slicing(self):
187
+ r"""
188
+ Enable sliced VAE decoding.
189
+
190
+ When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
191
+ steps. This is useful to save some memory and allow larger batch sizes.
192
+ """
193
+ self.vae.enable_slicing()
194
+
195
+ def disable_vae_slicing(self):
196
+ r"""
197
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
198
+ computing decoding in one step.
199
+ """
200
+ self.vae.disable_slicing()
201
+
202
+ def enable_sequential_cpu_offload(self, gpu_id=0):
203
+ r"""
204
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
205
+ text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a
206
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
207
+ Note that offloading happens on a submodule basis. Memory savings are higher than with
208
+ `enable_model_cpu_offload`, but performance is lower.
209
+ """
210
+ if is_accelerate_available():
211
+ from accelerate import cpu_offload
212
+ else:
213
+ raise ImportError("Please install accelerate via `pip install accelerate`")
214
+
215
+ device = torch.device(f"cuda:{gpu_id}")
216
+
217
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.controlnet]:
218
+ cpu_offload(cpu_offloaded_model, device)
219
+
220
+ if self.safety_checker is not None:
221
+ cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True)
222
+
223
+ def enable_model_cpu_offload(self, gpu_id=0):
224
+ r"""
225
+ Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
226
+ to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
227
+ method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
228
+ `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
229
+ """
230
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
231
+ from accelerate import cpu_offload_with_hook
232
+ else:
233
+ raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
234
+
235
+ device = torch.device(f"cuda:{gpu_id}")
236
+
237
+ hook = None
238
+ for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
239
+ _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
240
+
241
+ if self.safety_checker is not None:
242
+ # the safety checker can offload the vae again
243
+ _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
244
+
245
+ # the controlnet hook has to be manually offloaded as it alternates with the unet
246
+ cpu_offload_with_hook(self.controlnet, device)
247
+
248
+ # We'll offload the last model manually.
249
+ self.final_offload_hook = hook
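+ 
+         # Editor's note (hedged usage sketch, not part of the original file): on a memory-
+         # constrained GPU one would typically load the pipeline in fp16 and then call
+         #
+         #     pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
+         #         "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
+         #     )  # `controlnet` loaded beforehand as in the example docstring above
+         #     pipe.enable_model_cpu_offload()  # keeps only the active sub-model on the GPU
+         #
+         # trading a small amount of speed for a much lower peak VRAM footprint.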
250
+
251
+ @property
252
+ def _execution_device(self):
253
+ r"""
254
+ Returns the device on which the pipeline's models will be executed. After calling
255
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
256
+ hooks.
257
+ """
258
+ if not hasattr(self.unet, "_hf_hook"):
259
+ return self.device
260
+ for module in self.unet.modules():
261
+ if (
262
+ hasattr(module, "_hf_hook")
263
+ and hasattr(module._hf_hook, "execution_device")
264
+ and module._hf_hook.execution_device is not None
265
+ ):
266
+ return torch.device(module._hf_hook.execution_device)
267
+ return self.device
268
+
269
+ def _encode_prompt(
270
+ self,
271
+ prompt,
272
+ device,
273
+ num_images_per_prompt,
274
+ do_classifier_free_guidance,
275
+ negative_prompt=None,
276
+ prompt_embeds: Optional[torch.FloatTensor] = None,
277
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
278
+ ):
279
+ r"""
280
+ Encodes the prompt into text encoder hidden states.
281
+
282
+ Args:
283
+ prompt (`str` or `List[str]`, *optional*):
284
+ prompt to be encoded
285
+ device: (`torch.device`):
286
+ torch device
287
+ num_images_per_prompt (`int`):
288
+ number of images that should be generated per prompt
289
+ do_classifier_free_guidance (`bool`):
290
+ whether to use classifier free guidance or not
291
+ negative_prompt (`str` or `List[str]`, *optional*):
292
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
293
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
294
+ prompt_embeds (`torch.FloatTensor`, *optional*):
295
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
296
+ provided, text embeddings will be generated from `prompt` input argument.
297
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
298
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
299
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
300
+ argument.
301
+ """
302
+ if prompt is not None and isinstance(prompt, str):
303
+ batch_size = 1
304
+ elif prompt is not None and isinstance(prompt, list):
305
+ batch_size = len(prompt)
306
+ else:
307
+ batch_size = prompt_embeds.shape[0]
308
+
309
+ if prompt_embeds is None:
310
+ text_inputs = self.tokenizer(
311
+ prompt,
312
+ padding="max_length",
313
+ max_length=self.tokenizer.model_max_length,
314
+ truncation=True,
315
+ return_tensors="pt",
316
+ )
317
+ text_input_ids = text_inputs.input_ids
318
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
319
+
320
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
321
+ text_input_ids, untruncated_ids
322
+ ):
323
+ removed_text = self.tokenizer.batch_decode(
324
+ untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
325
+ )
326
+ logger.warning(
327
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
328
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
329
+ )
330
+
331
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
332
+ attention_mask = text_inputs.attention_mask.to(device)
333
+ else:
334
+ attention_mask = None
335
+
336
+ prompt_embeds = self.text_encoder(
337
+ text_input_ids.to(device),
338
+ attention_mask=attention_mask,
339
+ )
340
+ prompt_embeds = prompt_embeds[0]
341
+
342
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
343
+
344
+ bs_embed, seq_len, _ = prompt_embeds.shape
345
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
346
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
347
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
348
+
349
+ # get unconditional embeddings for classifier free guidance
350
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
351
+ uncond_tokens: List[str]
352
+ if negative_prompt is None:
353
+ uncond_tokens = [""] * batch_size
354
+ elif type(prompt) is not type(negative_prompt):
355
+ raise TypeError(
356
+ f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
357
+ f" {type(prompt)}."
358
+ )
359
+ elif isinstance(negative_prompt, str):
360
+ uncond_tokens = [negative_prompt]
361
+ elif batch_size != len(negative_prompt):
362
+ raise ValueError(
363
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
364
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
365
+ " the batch size of `prompt`."
366
+ )
367
+ else:
368
+ uncond_tokens = negative_prompt
369
+
370
+ max_length = prompt_embeds.shape[1]
371
+ uncond_input = self.tokenizer(
372
+ uncond_tokens,
373
+ padding="max_length",
374
+ max_length=max_length,
375
+ truncation=True,
376
+ return_tensors="pt",
377
+ )
378
+
379
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
380
+ attention_mask = uncond_input.attention_mask.to(device)
381
+ else:
382
+ attention_mask = None
383
+
384
+ negative_prompt_embeds = self.text_encoder(
385
+ uncond_input.input_ids.to(device),
386
+ attention_mask=attention_mask,
387
+ )
388
+ negative_prompt_embeds = negative_prompt_embeds[0]
389
+
390
+ if do_classifier_free_guidance:
391
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
392
+ seq_len = negative_prompt_embeds.shape[1]
393
+
394
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
395
+
396
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
397
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
398
+
399
+ # For classifier free guidance, we need to do two forward passes.
400
+ # Here we concatenate the unconditional and text embeddings into a single batch
401
+ # to avoid doing two forward passes
402
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
403
+
404
+ return prompt_embeds
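+ 
+ # Editor's note (hedged sketch, not from the original file): with classifier-free guidance the
+ # returned embeddings stack negative and positive rows along the batch dimension. For one
+ # prompt, num_images_per_prompt=2 and CLIP's 77-token context, the result has shape
+ # (1 * 2 * 2, 77, hidden_dim) = (4, 77, hidden_dim), ordered [uncond, uncond, cond, cond],
+ # so a single UNet forward pass can serve both halves of the guidance computation.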
405
+
406
+ def run_safety_checker(self, image, device, dtype):
407
+ if self.safety_checker is not None:
408
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
409
+ image, has_nsfw_concept = self.safety_checker(
410
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
411
+ )
412
+ else:
413
+ has_nsfw_concept = None
414
+ return image, has_nsfw_concept
415
+
416
+ def decode_latents(self, latents):
417
+ latents = 1 / self.vae.config.scaling_factor * latents
418
+ image = self.vae.decode(latents).sample
419
+ image = (image / 2 + 0.5).clamp(0, 1)
420
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
421
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
422
+ return image
423
+
424
+ def prepare_extra_step_kwargs(self, generator, eta):
425
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
426
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
427
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
428
+ # and should be between [0, 1]
429
+
430
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
431
+ extra_step_kwargs = {}
432
+ if accepts_eta:
433
+ extra_step_kwargs["eta"] = eta
434
+
435
+ # check if the scheduler accepts generator
436
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
437
+ if accepts_generator:
438
+ extra_step_kwargs["generator"] = generator
439
+ return extra_step_kwargs
440
+
441
+ def check_controlnet_conditioning_image(self, image, prompt, prompt_embeds):
442
+ image_is_pil = isinstance(image, PIL.Image.Image)
443
+ image_is_tensor = isinstance(image, torch.Tensor)
444
+ image_is_pil_list = isinstance(image, list) and isinstance(image[0], PIL.Image.Image)
445
+ image_is_tensor_list = isinstance(image, list) and isinstance(image[0], torch.Tensor)
446
+
447
+ if not image_is_pil and not image_is_tensor and not image_is_pil_list and not image_is_tensor_list:
448
+ raise TypeError(
449
+ "image must be passed and be one of PIL image, torch tensor, list of PIL images, or list of torch tensors"
450
+ )
451
+
452
+ if image_is_pil:
453
+ image_batch_size = 1
454
+ elif image_is_tensor:
455
+ image_batch_size = image.shape[0]
456
+ elif image_is_pil_list:
457
+ image_batch_size = len(image)
458
+ elif image_is_tensor_list:
459
+ image_batch_size = len(image)
460
+ else:
461
+ raise ValueError("controlnet condition image is not valid")
462
+
463
+ if prompt is not None and isinstance(prompt, str):
464
+ prompt_batch_size = 1
465
+ elif prompt is not None and isinstance(prompt, list):
466
+ prompt_batch_size = len(prompt)
467
+ elif prompt_embeds is not None:
468
+ prompt_batch_size = prompt_embeds.shape[0]
469
+ else:
470
+ raise ValueError("prompt or prompt_embeds are not valid")
471
+
472
+ if image_batch_size != 1 and image_batch_size != prompt_batch_size:
473
+ raise ValueError(
474
+ f"If image batch size is not 1, image batch size must be the same as prompt batch size. image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}"
475
+ )
476
+
477
+ def check_inputs(
478
+ self,
479
+ prompt,
480
+ image,
481
+ controlnet_conditioning_image,
482
+ height,
483
+ width,
484
+ callback_steps,
485
+ negative_prompt=None,
486
+ prompt_embeds=None,
487
+ negative_prompt_embeds=None,
488
+ strength=None,
489
+ controlnet_guidance_start=None,
490
+ controlnet_guidance_end=None,
491
+ controlnet_conditioning_scale=None,
492
+ ):
493
+ if height % 8 != 0 or width % 8 != 0:
494
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
495
+
496
+ if (callback_steps is None) or (
497
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
498
+ ):
499
+ raise ValueError(
500
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
501
+ f" {type(callback_steps)}."
502
+ )
503
+
504
+ if prompt is not None and prompt_embeds is not None:
505
+ raise ValueError(
506
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
507
+ " only forward one of the two."
508
+ )
509
+ elif prompt is None and prompt_embeds is None:
510
+ raise ValueError(
511
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
512
+ )
513
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
514
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
515
+
516
+ if negative_prompt is not None and negative_prompt_embeds is not None:
517
+ raise ValueError(
518
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
519
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
520
+ )
521
+
522
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
523
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
524
+ raise ValueError(
525
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
526
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
527
+ f" {negative_prompt_embeds.shape}."
528
+ )
529
+
530
+ # check controlnet condition image
531
+
532
+ if isinstance(self.controlnet, ControlNetModel):
533
+ self.check_controlnet_conditioning_image(controlnet_conditioning_image, prompt, prompt_embeds)
534
+ elif isinstance(self.controlnet, MultiControlNetModel):
535
+ if not isinstance(controlnet_conditioning_image, list):
536
+ raise TypeError("For multiple controlnets: `image` must be type `list`")
537
+
538
+ if len(controlnet_conditioning_image) != len(self.controlnet.nets):
539
+ raise ValueError(
540
+ "For multiple controlnets: `image` must have the same length as the number of controlnets."
541
+ )
542
+
543
+ for image_ in controlnet_conditioning_image:
544
+ self.check_controlnet_conditioning_image(image_, prompt, prompt_embeds)
545
+ else:
546
+ assert False
547
+
548
+ # Check `controlnet_conditioning_scale`
549
+
550
+ if isinstance(self.controlnet, ControlNetModel):
551
+ if not isinstance(controlnet_conditioning_scale, float):
552
+ raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.")
553
+ elif isinstance(self.controlnet, MultiControlNetModel):
554
+ if isinstance(controlnet_conditioning_scale, list) and len(controlnet_conditioning_scale) != len(
555
+ self.controlnet.nets
556
+ ):
557
+ raise ValueError(
558
+ "For multiple controlnets: When `controlnet_conditioning_scale` is specified as `list`, it must have"
559
+ " the same length as the number of controlnets"
560
+ )
561
+ else:
562
+ assert False
563
+
564
+ if isinstance(image, torch.Tensor):
565
+ if image.ndim != 3 and image.ndim != 4:
566
+ raise ValueError("`image` must have 3 or 4 dimensions")
567
+
568
+ if image.ndim == 3:
569
+ image_batch_size = 1
570
+ image_channels, image_height, image_width = image.shape
571
+ elif image.ndim == 4:
572
+ image_batch_size, image_channels, image_height, image_width = image.shape
573
+ else:
574
+ assert False
575
+
576
+ if image_channels != 3:
577
+ raise ValueError("`image` must have 3 channels")
578
+
579
+ if image.min() < -1 or image.max() > 1:
580
+ raise ValueError("`image` should be in range [-1, 1]")
581
+
582
+ if self.vae.config.latent_channels != self.unet.config.in_channels:
583
+ raise ValueError(
584
+ f"The config of `pipeline.unet` expects {self.unet.config.in_channels} but received"
585
+ f" latent channels: {self.vae.config.latent_channels}."
586
+ f" Please verify the config of `pipeline.unet` and the `pipeline.vae`"
587
+ )
588
+
589
+ if strength < 0 or strength > 1:
590
+ raise ValueError(f"The value of `strength` should be in [0.0, 1.0] but is {strength}")
591
+
592
+ if controlnet_guidance_start < 0 or controlnet_guidance_start > 1:
593
+ raise ValueError(
594
+ f"The value of `controlnet_guidance_start` should be in [0.0, 1.0] but is {controlnet_guidance_start}"
595
+ )
596
+
597
+ if controlnet_guidance_end < 0 or controlnet_guidance_end > 1:
598
+ raise ValueError(
599
+ f"The value of `controlnet_guidance_end` should be in [0.0, 1.0] but is {controlnet_guidance_end}"
600
+ )
601
+
602
+ if controlnet_guidance_start > controlnet_guidance_end:
603
+ raise ValueError(
604
+ "The value of `controlnet_guidance_start` should be less than `controlnet_guidance_end`, but got"
605
+ f" `controlnet_guidance_start` {controlnet_guidance_start} > `controlnet_guidance_end` {controlnet_guidance_end}"
606
+ )
607
+
608
+ def get_timesteps(self, num_inference_steps, strength, device):
609
+ # get the original timestep using init_timestep
610
+ init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
611
+
612
+ t_start = max(num_inference_steps - init_timestep, 0)
613
+ timesteps = self.scheduler.timesteps[t_start:]
614
+
615
+ return timesteps, num_inference_steps - t_start
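+ 
+ # Editor's note (worked example, not part of the original file): with num_inference_steps=50
+ # and strength=0.8, init_timestep = min(int(50 * 0.8), 50) = 40 and t_start = 50 - 40 = 10,
+ # so denoising runs over the last 40 scheduler timesteps; strength=1.0 keeps all 50 steps,
+ # while strength=0.0 leaves no denoising steps at all.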
616
+
617
+ def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None):
618
+ if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
619
+ raise ValueError(
620
+ f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
621
+ )
622
+
623
+ image = image.to(device=device, dtype=dtype)
624
+
625
+ batch_size = batch_size * num_images_per_prompt
626
+ if isinstance(generator, list) and len(generator) != batch_size:
627
+ raise ValueError(
628
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
629
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
630
+ )
631
+
632
+ if isinstance(generator, list):
633
+ init_latents = [
634
+ self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
635
+ ]
636
+ init_latents = torch.cat(init_latents, dim=0)
637
+ else:
638
+ init_latents = self.vae.encode(image).latent_dist.sample(generator)
639
+
640
+ init_latents = self.vae.config.scaling_factor * init_latents
641
+
642
+ if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
643
+ raise ValueError(
644
+ f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
645
+ )
646
+ else:
647
+ init_latents = torch.cat([init_latents], dim=0)
648
+
649
+ shape = init_latents.shape
650
+ noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
651
+
652
+ # get latents
653
+ init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
654
+ latents = init_latents
655
+
656
+ return latents
657
+
658
+ def _default_height_width(self, height, width, image):
659
+ if isinstance(image, list):
660
+ image = image[0]
661
+
662
+ if height is None:
663
+ if isinstance(image, PIL.Image.Image):
664
+ height = image.height
665
+ elif isinstance(image, torch.Tensor):
666
+ height = image.shape[2]  # NCHW tensors: dim 2 is height
667
+
668
+ height = (height // 8) * 8 # round down to nearest multiple of 8
669
+
670
+ if width is None:
671
+ if isinstance(image, PIL.Image.Image):
672
+ width = image.width
673
+ elif isinstance(image, torch.Tensor):
674
+ width = image.shape[3]  # NCHW tensors: dim 3 is width
675
+
676
+ width = (width // 8) * 8 # round down to nearest multiple of 8
677
+
678
+ return height, width
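+ 
+ # Editor's note (worked example, not from the original file): dimensions are rounded down to
+ # the nearest multiple of 8 to stay compatible with the VAE downsampling factor, e.g. a
+ # 513x769 control image yields height, width = (513 // 8) * 8, (769 // 8) * 8 = 512, 768.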
679
+
680
+ @torch.no_grad()
681
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
682
+ def __call__(
683
+ self,
684
+ prompt: Union[str, List[str]] = None,
685
+ image: Union[torch.Tensor, PIL.Image.Image] = None,
686
+ controlnet_conditioning_image: Union[
687
+ torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]
688
+ ] = None,
689
+ strength: float = 0.8,
690
+ height: Optional[int] = None,
691
+ width: Optional[int] = None,
692
+ num_inference_steps: int = 50,
693
+ guidance_scale: float = 7.5,
694
+ negative_prompt: Optional[Union[str, List[str]]] = None,
695
+ num_images_per_prompt: Optional[int] = 1,
696
+ eta: float = 0.0,
697
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
698
+ latents: Optional[torch.FloatTensor] = None,
699
+ prompt_embeds: Optional[torch.FloatTensor] = None,
700
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
701
+ output_type: Optional[str] = "pil",
702
+ return_dict: bool = True,
703
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
704
+ callback_steps: int = 1,
705
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
706
+ controlnet_conditioning_scale: Union[float, List[float]] = 1.0,
707
+ controlnet_guidance_start: float = 0.0,
708
+ controlnet_guidance_end: float = 1.0,
709
+ ):
710
+ r"""
711
+ Function invoked when calling the pipeline for generation.
712
+
713
+ Args:
714
+ prompt (`str` or `List[str]`, *optional*):
715
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
716
+ instead.
717
+ image (`torch.Tensor` or `PIL.Image.Image`):
718
+ `Image`, or tensor representing an image batch, to be used as the starting point for the
720
+ img2img process. It will be noised according to `strength` and then denoised following `prompt`.
720
+ controlnet_conditioning_image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]` or `List[PIL.Image.Image]`):
721
+ The ControlNet input condition. ControlNet uses this input condition to generate guidance for the UNet. If
722
+ the type is specified as `torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can
723
+ also be accepted as an image. The control image is automatically resized to fit the output image.
724
+ strength (`float`, *optional*):
725
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
726
+ will be used as a starting point, adding more noise to it the larger the `strength`. The number of
727
+ denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
728
+ be maximum and the denoising process will run for the full number of iterations specified in
729
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
730
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
731
+ The height in pixels of the generated image.
732
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
733
+ The width in pixels of the generated image.
734
+ num_inference_steps (`int`, *optional*, defaults to 50):
735
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
736
+ expense of slower inference.
737
+ guidance_scale (`float`, *optional*, defaults to 7.5):
738
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
739
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
740
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
741
+ 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
742
+ usually at the expense of lower image quality.
743
+ negative_prompt (`str` or `List[str]`, *optional*):
744
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
745
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
746
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
747
+ The number of images to generate per prompt.
748
+ eta (`float`, *optional*, defaults to 0.0):
749
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
750
+ [`schedulers.DDIMScheduler`], will be ignored for others.
751
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
752
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
753
+ to make generation deterministic.
754
+ latents (`torch.FloatTensor`, *optional*):
755
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
756
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
757
+ tensor will be generated by sampling using the supplied random `generator`.
758
+ prompt_embeds (`torch.FloatTensor`, *optional*):
759
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
760
+ provided, text embeddings will be generated from `prompt` input argument.
761
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
762
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
763
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
764
+ argument.
765
+ output_type (`str`, *optional*, defaults to `"pil"`):
766
+ The output format of the generated image. Choose between
767
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
768
+ return_dict (`bool`, *optional*, defaults to `True`):
769
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
770
+ plain tuple.
771
+ callback (`Callable`, *optional*):
772
+ A function that will be called every `callback_steps` steps during inference. The function will be
773
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
774
+ callback_steps (`int`, *optional*, defaults to 1):
775
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
776
+ called at every step.
777
+ cross_attention_kwargs (`dict`, *optional*):
778
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
779
+ `self.processor` in
780
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
781
+ controlnet_conditioning_scale (`float`, *optional*, defaults to 1.0):
782
+ The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
783
+ to the residual in the original unet.
784
+ controlnet_guidance_start (`float`, *optional*, defaults to 0.0):
785
+ The percentage of total steps the controlnet starts applying. Must be between 0 and 1.
786
+ controlnet_guidance_end (`float`, *optional*, defaults to 1.0):
787
+ The percentage of total steps the controlnet ends applying. Must be between 0 and 1. Must be greater
788
+ than `controlnet_guidance_start`.
789
+
790
+ Examples:
791
+
792
+ Returns:
793
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
794
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
795
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
796
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
797
+ (nsfw) content, according to the `safety_checker`.
798
+ """
799
+ # 0. Default height and width to unet
800
+ height, width = self._default_height_width(height, width, controlnet_conditioning_image)
801
+
802
+ # 1. Check inputs. Raise error if not correct
803
+ self.check_inputs(
804
+ prompt,
805
+ image,
806
+ controlnet_conditioning_image,
807
+ height,
808
+ width,
809
+ callback_steps,
810
+ negative_prompt,
811
+ prompt_embeds,
812
+ negative_prompt_embeds,
813
+ strength,
814
+ controlnet_guidance_start,
815
+ controlnet_guidance_end,
816
+ controlnet_conditioning_scale,
817
+ )
818
+
819
+ # 2. Define call parameters
820
+ if prompt is not None and isinstance(prompt, str):
821
+ batch_size = 1
822
+ elif prompt is not None and isinstance(prompt, list):
823
+ batch_size = len(prompt)
824
+ else:
825
+ batch_size = prompt_embeds.shape[0]
826
+
827
+ device = self._execution_device
828
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
829
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
830
+ # corresponds to doing no classifier free guidance.
831
+ do_classifier_free_guidance = guidance_scale > 1.0
832
+
833
+ if isinstance(self.controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float):
834
+ controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(self.controlnet.nets)
835
+
836
+ # 3. Encode input prompt
837
+ prompt_embeds = self._encode_prompt(
838
+ prompt,
839
+ device,
840
+ num_images_per_prompt,
841
+ do_classifier_free_guidance,
842
+ negative_prompt,
843
+ prompt_embeds=prompt_embeds,
844
+ negative_prompt_embeds=negative_prompt_embeds,
845
+ )
846
+
847
+ # 4. Prepare image, and controlnet_conditioning_image
848
+ image = prepare_image(image)
849
+
850
+ # condition image(s)
851
+ if isinstance(self.controlnet, ControlNetModel):
852
+ controlnet_conditioning_image = prepare_controlnet_conditioning_image(
853
+ controlnet_conditioning_image=controlnet_conditioning_image,
854
+ width=width,
855
+ height=height,
856
+ batch_size=batch_size * num_images_per_prompt,
857
+ num_images_per_prompt=num_images_per_prompt,
858
+ device=device,
859
+ dtype=self.controlnet.dtype,
860
+ do_classifier_free_guidance=do_classifier_free_guidance,
861
+ )
862
+ elif isinstance(self.controlnet, MultiControlNetModel):
863
+ controlnet_conditioning_images = []
864
+
865
+ for image_ in controlnet_conditioning_image:
866
+ image_ = prepare_controlnet_conditioning_image(
867
+ controlnet_conditioning_image=image_,
868
+ width=width,
869
+ height=height,
870
+ batch_size=batch_size * num_images_per_prompt,
871
+ num_images_per_prompt=num_images_per_prompt,
872
+ device=device,
873
+ dtype=self.controlnet.dtype,
874
+ do_classifier_free_guidance=do_classifier_free_guidance,
875
+ )
876
+
877
+ controlnet_conditioning_images.append(image_)
878
+
879
+ controlnet_conditioning_image = controlnet_conditioning_images
880
+ else:
881
+ assert False
882
+
883
+ # 5. Prepare timesteps
884
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
885
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device)
886
+ latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
887
+
888
+ # 6. Prepare latent variables
889
+ latents = self.prepare_latents(
890
+ image,
891
+ latent_timestep,
892
+ batch_size,
893
+ num_images_per_prompt,
894
+ prompt_embeds.dtype,
895
+ device,
896
+ generator,
897
+ )
898
+
899
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
900
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
901
+
902
+ # 8. Denoising loop
903
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
904
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
905
+ for i, t in enumerate(timesteps):
906
+ # expand the latents if we are doing classifier free guidance
907
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
908
+
909
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
910
+
911
+ # compute the percentage of total steps we are at
912
+ current_sampling_percent = i / len(timesteps)
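+ 
+                 # Editor's note (worked example, not part of the original file): with 20
+                 # timesteps, controlnet_guidance_start=0.1 and controlnet_guidance_end=0.9,
+                 # current_sampling_percent runs from 0.0 to 0.95, so the ControlNet residuals
+                 # are skipped for i = 0, 1 and 19 and applied for i = 2..18.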
913
+
914
+ if (
915
+ current_sampling_percent < controlnet_guidance_start
916
+ or current_sampling_percent > controlnet_guidance_end
917
+ ):
918
+ # do not apply the controlnet
919
+ down_block_res_samples = None
920
+ mid_block_res_sample = None
921
+ else:
922
+ # apply the controlnet
923
+ down_block_res_samples, mid_block_res_sample = self.controlnet(
924
+ latent_model_input,
925
+ t,
926
+ encoder_hidden_states=prompt_embeds,
927
+ controlnet_cond=controlnet_conditioning_image,
928
+ conditioning_scale=controlnet_conditioning_scale,
929
+ return_dict=False,
930
+ )
931
+
932
+ # predict the noise residual
933
+ noise_pred = self.unet(
934
+ latent_model_input,
935
+ t,
936
+ encoder_hidden_states=prompt_embeds,
937
+ cross_attention_kwargs=cross_attention_kwargs,
938
+ down_block_additional_residuals=down_block_res_samples,
939
+ mid_block_additional_residual=mid_block_res_sample,
940
+ ).sample
941
+
942
+ # perform guidance
943
+ if do_classifier_free_guidance:
944
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
945
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
946
+
947
+ # compute the previous noisy sample x_t -> x_t-1
948
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
949
+
950
+ # call the callback, if provided
951
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
952
+ progress_bar.update()
953
+ if callback is not None and i % callback_steps == 0:
954
+ callback(i, t, latents)
955
+
956
+ # If we do sequential model offloading, let's offload unet and controlnet
957
+ # manually for max memory savings
958
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
959
+ self.unet.to("cpu")
960
+ self.controlnet.to("cpu")
961
+ torch.cuda.empty_cache()
962
+
963
+ if output_type == "latent":
964
+ image = latents
965
+ has_nsfw_concept = None
966
+ elif output_type == "pil":
967
+ # 8. Post-processing
968
+ image = self.decode_latents(latents)
969
+
970
+ # 9. Run safety checker
971
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
972
+
973
+ # 10. Convert to PIL
974
+ image = self.numpy_to_pil(image)
975
+ else:
976
+ # 8. Post-processing
977
+ image = self.decode_latents(latents)
978
+
979
+ # 9. Run safety checker
980
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
981
+
982
+ # Offload last model to CPU
983
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
984
+ self.final_offload_hook.offload()
985
+
986
+ if not return_dict:
987
+ return (image, has_nsfw_concept)
988
+
989
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
v0.19.2/stable_diffusion_controlnet_inpaint.py ADDED
@@ -0,0 +1,1138 @@
1
+ # Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/
2
+
3
+ import inspect
4
+ from typing import Any, Callable, Dict, List, Optional, Tuple, Union
5
+
6
+ import numpy as np
7
+ import PIL.Image
8
+ import torch
9
+ import torch.nn.functional as F
10
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
11
+
12
+ from diffusers import AutoencoderKL, ControlNetModel, DiffusionPipeline, UNet2DConditionModel, logging
13
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker
14
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
15
+ from diffusers.schedulers import KarrasDiffusionSchedulers
16
+ from diffusers.utils import (
17
+ PIL_INTERPOLATION,
18
+ is_accelerate_available,
19
+ is_accelerate_version,
20
+ randn_tensor,
21
+ replace_example_docstring,
22
+ )
23
+
24
+
25
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
26
+
27
+ EXAMPLE_DOC_STRING = """
28
+ Examples:
29
+ ```py
30
+ >>> import numpy as np
31
+ >>> import torch
32
+ >>> from PIL import Image
33
+ >>> from stable_diffusion_controlnet_inpaint import StableDiffusionControlNetInpaintPipeline
34
+
35
+ >>> from transformers import AutoImageProcessor, UperNetForSemanticSegmentation
36
+ >>> from diffusers import ControlNetModel, UniPCMultistepScheduler
37
+ >>> from diffusers.utils import load_image
38
+
39
+ >>> def ade_palette():
40
+ return [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50],
41
+ [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255],
42
+ [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7],
43
+ [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82],
44
+ [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3],
45
+ [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255],
46
+ [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220],
47
+ [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224],
48
+ [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255],
49
+ [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7],
50
+ [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153],
51
+ [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255],
52
+ [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0],
53
+ [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255],
54
+ [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255],
55
+ [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255],
56
+ [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0],
57
+ [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0],
58
+ [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255],
59
+ [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255],
60
+ [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20],
61
+ [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255],
62
+ [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255],
63
+ [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255],
64
+ [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0],
65
+ [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0],
66
+ [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255],
67
+ [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112],
68
+ [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160],
69
+ [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163],
70
+ [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0],
71
+ [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0],
72
+ [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255],
73
+ [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204],
74
+ [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255],
75
+ [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255],
76
+ [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194],
77
+ [102, 255, 0], [92, 0, 255]]
78
+
79
+ >>> image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
80
+ >>> image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")
81
+
82
+ >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16)
83
+
84
+ >>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
85
+ "runwayml/stable-diffusion-inpainting", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
86
+ )
87
+
88
+ >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
89
+ >>> pipe.enable_xformers_memory_efficient_attention()
90
+ >>> pipe.enable_model_cpu_offload()
91
+
92
+ >>> def image_to_seg(image):
93
+ pixel_values = image_processor(image, return_tensors="pt").pixel_values
94
+ with torch.no_grad():
95
+ outputs = image_segmentor(pixel_values)
96
+ seg = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
97
+ color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) # height, width, 3
98
+ palette = np.array(ade_palette())
99
+ for label, color in enumerate(palette):
100
+ color_seg[seg == label, :] = color
101
+ color_seg = color_seg.astype(np.uint8)
102
+ seg_image = Image.fromarray(color_seg)
103
+ return seg_image
104
+
105
+ >>> image = load_image(
106
+ "https://github.com/CompVis/latent-diffusion/raw/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
107
+ )
108
+
109
+ >>> mask_image = load_image(
110
+ "https://github.com/CompVis/latent-diffusion/raw/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
111
+ )
112
+
113
+ >>> controlnet_conditioning_image = image_to_seg(image)
114
+
115
+ >>> image = pipe(
116
+ "Face of a yellow cat, high resolution, sitting on a park bench",
117
+ image,
118
+ mask_image,
119
+ controlnet_conditioning_image,
120
+ num_inference_steps=20,
121
+ ).images[0]
122
+
123
+ >>> image.save("out.png")
124
+ ```
125
+ """
126
+
127
+
128
+ def prepare_image(image):
129
+ if isinstance(image, torch.Tensor):
130
+ # Batch single image
131
+ if image.ndim == 3:
132
+ image = image.unsqueeze(0)
133
+
134
+ image = image.to(dtype=torch.float32)
135
+ else:
136
+ # preprocess image
137
+ if isinstance(image, (PIL.Image.Image, np.ndarray)):
138
+ image = [image]
139
+
140
+ if isinstance(image, list) and isinstance(image[0], PIL.Image.Image):
141
+ image = [np.array(i.convert("RGB"))[None, :] for i in image]
142
+ image = np.concatenate(image, axis=0)
143
+ elif isinstance(image, list) and isinstance(image[0], np.ndarray):
144
+ image = np.concatenate([i[None, :] for i in image], axis=0)
145
+
146
+ image = image.transpose(0, 3, 1, 2)
147
+ image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
148
+
149
+ return image
150
+
151
+
152
+ def prepare_mask_image(mask_image):
153
+ if isinstance(mask_image, torch.Tensor):
154
+ if mask_image.ndim == 2:
155
+ # Batch and add channel dim for single mask
156
+ mask_image = mask_image.unsqueeze(0).unsqueeze(0)
157
+ elif mask_image.ndim == 3 and mask_image.shape[0] == 1:
158
+ # Single mask, the 0'th dimension is considered to be
159
+ # the existing batch size of 1
160
+ mask_image = mask_image.unsqueeze(0)
161
+ elif mask_image.ndim == 3 and mask_image.shape[0] != 1:
162
+ # Batch of mask, the 0'th dimension is considered to be
163
+ # the batching dimension
164
+ mask_image = mask_image.unsqueeze(1)
165
+
166
+ # Binarize mask
167
+ mask_image[mask_image < 0.5] = 0
168
+ mask_image[mask_image >= 0.5] = 1
169
+ else:
170
+ # preprocess mask
171
+ if isinstance(mask_image, (PIL.Image.Image, np.ndarray)):
172
+ mask_image = [mask_image]
173
+
174
+ if isinstance(mask_image, list) and isinstance(mask_image[0], PIL.Image.Image):
175
+ mask_image = np.concatenate([np.array(m.convert("L"))[None, None, :] for m in mask_image], axis=0)
176
+ mask_image = mask_image.astype(np.float32) / 255.0
177
+ elif isinstance(mask_image, list) and isinstance(mask_image[0], np.ndarray):
178
+ mask_image = np.concatenate([m[None, None, :] for m in mask_image], axis=0)
179
+
180
+ mask_image[mask_image < 0.5] = 0
181
+ mask_image[mask_image >= 0.5] = 1
182
+ mask_image = torch.from_numpy(mask_image)
183
+
184
+ return mask_image
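+ 
+ # Editor's note (hedged sketch, not part of the original file): a grayscale PIL mask is scaled
+ # to [0, 1] and binarized at 0.5, so uint8 pixel values below 128 become 0 and values of 128
+ # and above become 1 (conventionally the region to repaint), e.g. a mask value of 200 maps to
+ # 200 / 255 ≈ 0.78 >= 0.5 and is therefore treated as masked.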
185
+
186
+
187
+ def prepare_controlnet_conditioning_image(
188
+ controlnet_conditioning_image,
189
+ width,
190
+ height,
191
+ batch_size,
192
+ num_images_per_prompt,
193
+ device,
194
+ dtype,
195
+ do_classifier_free_guidance,
196
+ ):
197
+ if not isinstance(controlnet_conditioning_image, torch.Tensor):
198
+ if isinstance(controlnet_conditioning_image, PIL.Image.Image):
199
+ controlnet_conditioning_image = [controlnet_conditioning_image]
200
+
201
+ if isinstance(controlnet_conditioning_image[0], PIL.Image.Image):
202
+ controlnet_conditioning_image = [
203
+ np.array(i.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]))[None, :]
204
+ for i in controlnet_conditioning_image
205
+ ]
206
+ controlnet_conditioning_image = np.concatenate(controlnet_conditioning_image, axis=0)
207
+ controlnet_conditioning_image = np.array(controlnet_conditioning_image).astype(np.float32) / 255.0
208
+ controlnet_conditioning_image = controlnet_conditioning_image.transpose(0, 3, 1, 2)
209
+ controlnet_conditioning_image = torch.from_numpy(controlnet_conditioning_image)
210
+ elif isinstance(controlnet_conditioning_image[0], torch.Tensor):
211
+ controlnet_conditioning_image = torch.cat(controlnet_conditioning_image, dim=0)
212
+
213
+ image_batch_size = controlnet_conditioning_image.shape[0]
214
+
215
+ if image_batch_size == 1:
216
+ repeat_by = batch_size
217
+ else:
218
+ # image batch size is the same as prompt batch size
219
+ repeat_by = num_images_per_prompt
220
+
221
+ controlnet_conditioning_image = controlnet_conditioning_image.repeat_interleave(repeat_by, dim=0)
222
+
223
+ controlnet_conditioning_image = controlnet_conditioning_image.to(device=device, dtype=dtype)
224
+
225
+ if do_classifier_free_guidance:
226
+ controlnet_conditioning_image = torch.cat([controlnet_conditioning_image] * 2)
227
+
228
+ return controlnet_conditioning_image
229
+
230
+
231
+ class StableDiffusionControlNetInpaintPipeline(DiffusionPipeline):
232
+ """
233
+ Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/
234
+ """
235
+
236
+ _optional_components = ["safety_checker", "feature_extractor"]
237
+
238
+ def __init__(
239
+ self,
240
+ vae: AutoencoderKL,
241
+ text_encoder: CLIPTextModel,
242
+ tokenizer: CLIPTokenizer,
243
+ unet: UNet2DConditionModel,
244
+ controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel],
245
+ scheduler: KarrasDiffusionSchedulers,
246
+ safety_checker: StableDiffusionSafetyChecker,
247
+ feature_extractor: CLIPImageProcessor,
248
+ requires_safety_checker: bool = True,
249
+ ):
250
+ super().__init__()
251
+
252
+ if safety_checker is None and requires_safety_checker:
253
+ logger.warning(
254
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
255
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
256
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
257
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
258
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
259
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
260
+ )
261
+
262
+ if safety_checker is not None and feature_extractor is None:
263
+ raise ValueError(
264
+ "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
265
+ " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
266
+ )
267
+
268
+ if isinstance(controlnet, (list, tuple)):
269
+ controlnet = MultiControlNetModel(controlnet)
270
+
271
+ self.register_modules(
272
+ vae=vae,
273
+ text_encoder=text_encoder,
274
+ tokenizer=tokenizer,
275
+ unet=unet,
276
+ controlnet=controlnet,
277
+ scheduler=scheduler,
278
+ safety_checker=safety_checker,
279
+ feature_extractor=feature_extractor,
280
+ )
281
+
282
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
283
+ self.register_to_config(requires_safety_checker=requires_safety_checker)
284
+
285
+ def enable_vae_slicing(self):
286
+ r"""
287
+ Enable sliced VAE decoding.
288
+
289
+ When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
290
+ steps. This is useful to save some memory and allow larger batch sizes.
291
+ """
292
+ self.vae.enable_slicing()
293
+
294
+ def disable_vae_slicing(self):
295
+ r"""
296
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
297
+ computing decoding in one step.
298
+ """
299
+ self.vae.disable_slicing()
300
+
301
+ def enable_sequential_cpu_offload(self, gpu_id=0):
302
+ r"""
303
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
304
+ text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a
305
+ `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called.
306
+ Note that offloading happens on a submodule basis. Memory savings are higher than with
307
+ `enable_model_cpu_offload`, but performance is lower.
308
+ """
309
+ if is_accelerate_available():
310
+ from accelerate import cpu_offload
311
+ else:
312
+ raise ImportError("Please install accelerate via `pip install accelerate`")
313
+
314
+ device = torch.device(f"cuda:{gpu_id}")
315
+
316
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.controlnet]:
317
+ cpu_offload(cpu_offloaded_model, device)
318
+
319
+ if self.safety_checker is not None:
320
+ cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True)
321
+
322
+ def enable_model_cpu_offload(self, gpu_id=0):
323
+ r"""
324
+ Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
325
+ to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
326
+ method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
327
+ `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
328
+ """
329
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
330
+ from accelerate import cpu_offload_with_hook
331
+ else:
332
+ raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
333
+
334
+ device = torch.device(f"cuda:{gpu_id}")
335
+
336
+ hook = None
337
+ for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
338
+ _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
339
+
340
+ if self.safety_checker is not None:
341
+ # the safety checker can offload the vae again
342
+ _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
343
+
344
+ # control net hook has be manually offloaded as it alternates with unet
345
+ cpu_offload_with_hook(self.controlnet, device)
346
+
347
+ # We'll offload the last model manually.
348
+ self.final_offload_hook = hook
349
+
350
+ @property
351
+ def _execution_device(self):
352
+ r"""
353
+ Returns the device on which the pipeline's models will be executed. After calling
354
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
355
+ hooks.
356
+ """
357
+ if not hasattr(self.unet, "_hf_hook"):
358
+ return self.device
359
+ for module in self.unet.modules():
360
+ if (
361
+ hasattr(module, "_hf_hook")
362
+ and hasattr(module._hf_hook, "execution_device")
363
+ and module._hf_hook.execution_device is not None
364
+ ):
365
+ return torch.device(module._hf_hook.execution_device)
366
+ return self.device
367
+
368
+ def _encode_prompt(
369
+ self,
370
+ prompt,
371
+ device,
372
+ num_images_per_prompt,
373
+ do_classifier_free_guidance,
374
+ negative_prompt=None,
375
+ prompt_embeds: Optional[torch.FloatTensor] = None,
376
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
377
+ ):
378
+ r"""
379
+ Encodes the prompt into text encoder hidden states.
380
+
381
+ Args:
382
+ prompt (`str` or `List[str]`, *optional*):
383
+ prompt to be encoded
384
+ device: (`torch.device`):
385
+ torch device
386
+ num_images_per_prompt (`int`):
387
+ number of images that should be generated per prompt
388
+ do_classifier_free_guidance (`bool`):
389
+ whether to use classifier free guidance or not
390
+ negative_prompt (`str` or `List[str]`, *optional*):
391
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead.
392
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
393
+ prompt_embeds (`torch.FloatTensor`, *optional*):
394
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
395
+ provided, text embeddings will be generated from `prompt` input argument.
396
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
397
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
398
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
399
+ argument.
400
+ """
401
+ if prompt is not None and isinstance(prompt, str):
402
+ batch_size = 1
403
+ elif prompt is not None and isinstance(prompt, list):
404
+ batch_size = len(prompt)
405
+ else:
406
+ batch_size = prompt_embeds.shape[0]
407
+
408
+ if prompt_embeds is None:
409
+ text_inputs = self.tokenizer(
410
+ prompt,
411
+ padding="max_length",
412
+ max_length=self.tokenizer.model_max_length,
413
+ truncation=True,
414
+ return_tensors="pt",
415
+ )
416
+ text_input_ids = text_inputs.input_ids
417
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
418
+
419
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
420
+ text_input_ids, untruncated_ids
421
+ ):
422
+ removed_text = self.tokenizer.batch_decode(
423
+ untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
424
+ )
425
+ logger.warning(
426
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
427
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
428
+ )
429
+
430
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
431
+ attention_mask = text_inputs.attention_mask.to(device)
432
+ else:
433
+ attention_mask = None
434
+
435
+ prompt_embeds = self.text_encoder(
436
+ text_input_ids.to(device),
437
+ attention_mask=attention_mask,
438
+ )
439
+ prompt_embeds = prompt_embeds[0]
440
+
441
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
442
+
443
+ bs_embed, seq_len, _ = prompt_embeds.shape
444
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
445
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
446
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
447
+
448
+ # get unconditional embeddings for classifier free guidance
449
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
450
+ uncond_tokens: List[str]
451
+ if negative_prompt is None:
452
+ uncond_tokens = [""] * batch_size
453
+ elif type(prompt) is not type(negative_prompt):
454
+ raise TypeError(
455
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
456
+ f" {type(prompt)}."
457
+ )
458
+ elif isinstance(negative_prompt, str):
459
+ uncond_tokens = [negative_prompt]
460
+ elif batch_size != len(negative_prompt):
461
+ raise ValueError(
462
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
463
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
464
+ " the batch size of `prompt`."
465
+ )
466
+ else:
467
+ uncond_tokens = negative_prompt
468
+
469
+ max_length = prompt_embeds.shape[1]
470
+ uncond_input = self.tokenizer(
471
+ uncond_tokens,
472
+ padding="max_length",
473
+ max_length=max_length,
474
+ truncation=True,
475
+ return_tensors="pt",
476
+ )
477
+
478
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
479
+ attention_mask = uncond_input.attention_mask.to(device)
480
+ else:
481
+ attention_mask = None
482
+
483
+ negative_prompt_embeds = self.text_encoder(
484
+ uncond_input.input_ids.to(device),
485
+ attention_mask=attention_mask,
486
+ )
487
+ negative_prompt_embeds = negative_prompt_embeds[0]
488
+
489
+ if do_classifier_free_guidance:
490
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
491
+ seq_len = negative_prompt_embeds.shape[1]
492
+
493
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
494
+
495
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
496
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
497
+
498
+ # For classifier free guidance, we need to do two forward passes.
499
+ # Here we concatenate the unconditional and text embeddings into a single batch
500
+ # to avoid doing two forward passes
501
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
502
+
503
+ return prompt_embeds
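As a hedged aside (not part of the uploaded file), a toy snippet may help illustrate the batch layout `_encode_prompt` produces under classifier-free guidance: negative and positive embeddings are concatenated along the batch axis here, the UNet output inherits that doubled batch, and the denoising loop splits it back with `chunk(2)`. The shapes below are illustrative assumptions.

```py
# Toy illustration of the classifier-free-guidance layout: [negative; positive] along dim 0.
import torch

batch_size, seq_len, dim = 2, 77, 768  # assumed CLIP-like shapes, for illustration only
negative_prompt_embeds = torch.zeros(batch_size, seq_len, dim)
prompt_embeds = torch.ones(batch_size, seq_len, dim)

# what `_encode_prompt` returns when do_classifier_free_guidance is True
stacked = torch.cat([negative_prompt_embeds, prompt_embeds])  # shape (4, 77, 768)

# the UNet's noise prediction carries the same doubled batch; the loop splits and recombines it
noise_pred_uncond, noise_pred_text = stacked.chunk(2)         # each (2, 77, 768)
guidance_scale = 7.5
guided = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
```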
504
+
505
+ def run_safety_checker(self, image, device, dtype):
506
+ if self.safety_checker is not None:
507
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
508
+ image, has_nsfw_concept = self.safety_checker(
509
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
510
+ )
511
+ else:
512
+ has_nsfw_concept = None
513
+ return image, has_nsfw_concept
514
+
515
+ def decode_latents(self, latents):
516
+ latents = 1 / self.vae.config.scaling_factor * latents
517
+ image = self.vae.decode(latents).sample
518
+ image = (image / 2 + 0.5).clamp(0, 1)
519
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
520
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
521
+ return image
522
+
523
+ def prepare_extra_step_kwargs(self, generator, eta):
524
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
525
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
526
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
527
+ # and should be between [0, 1]
528
+
529
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
530
+ extra_step_kwargs = {}
531
+ if accepts_eta:
532
+ extra_step_kwargs["eta"] = eta
533
+
534
+ # check if the scheduler accepts generator
535
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
536
+ if accepts_generator:
537
+ extra_step_kwargs["generator"] = generator
538
+ return extra_step_kwargs
539
+
540
+ def check_controlnet_conditioning_image(self, image, prompt, prompt_embeds):
541
+ image_is_pil = isinstance(image, PIL.Image.Image)
542
+ image_is_tensor = isinstance(image, torch.Tensor)
543
+ image_is_pil_list = isinstance(image, list) and isinstance(image[0], PIL.Image.Image)
544
+ image_is_tensor_list = isinstance(image, list) and isinstance(image[0], torch.Tensor)
545
+
546
+ if not image_is_pil and not image_is_tensor and not image_is_pil_list and not image_is_tensor_list:
547
+ raise TypeError(
548
+ "image must be passed and be one of PIL image, torch tensor, list of PIL images, or list of torch tensors"
549
+ )
550
+
551
+ if image_is_pil:
552
+ image_batch_size = 1
553
+ elif image_is_tensor:
554
+ image_batch_size = image.shape[0]
555
+ elif image_is_pil_list:
556
+ image_batch_size = len(image)
557
+ elif image_is_tensor_list:
558
+ image_batch_size = len(image)
559
+ else:
560
+ raise ValueError("controlnet condition image is not valid")
561
+
562
+ if prompt is not None and isinstance(prompt, str):
563
+ prompt_batch_size = 1
564
+ elif prompt is not None and isinstance(prompt, list):
565
+ prompt_batch_size = len(prompt)
566
+ elif prompt_embeds is not None:
567
+ prompt_batch_size = prompt_embeds.shape[0]
568
+ else:
569
+ raise ValueError("prompt or prompt_embeds are not valid")
570
+
571
+ if image_batch_size != 1 and image_batch_size != prompt_batch_size:
572
+ raise ValueError(
573
+ f"If image batch size is not 1, image batch size must be same as prompt batch size. image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}"
574
+ )
575
+
576
+ def check_inputs(
577
+ self,
578
+ prompt,
579
+ image,
580
+ mask_image,
581
+ controlnet_conditioning_image,
582
+ height,
583
+ width,
584
+ callback_steps,
585
+ negative_prompt=None,
586
+ prompt_embeds=None,
587
+ negative_prompt_embeds=None,
588
+ controlnet_conditioning_scale=None,
589
+ ):
590
+ if height % 8 != 0 or width % 8 != 0:
591
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
592
+
593
+ if (callback_steps is None) or (
594
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
595
+ ):
596
+ raise ValueError(
597
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
598
+ f" {type(callback_steps)}."
599
+ )
600
+
601
+ if prompt is not None and prompt_embeds is not None:
602
+ raise ValueError(
603
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
604
+ " only forward one of the two."
605
+ )
606
+ elif prompt is None and prompt_embeds is None:
607
+ raise ValueError(
608
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
609
+ )
610
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
611
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
612
+
613
+ if negative_prompt is not None and negative_prompt_embeds is not None:
614
+ raise ValueError(
615
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
616
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
617
+ )
618
+
619
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
620
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
621
+ raise ValueError(
622
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
623
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
624
+ f" {negative_prompt_embeds.shape}."
625
+ )
626
+
627
+ # check controlnet condition image
628
+ if isinstance(self.controlnet, ControlNetModel):
629
+ self.check_controlnet_conditioning_image(controlnet_conditioning_image, prompt, prompt_embeds)
630
+ elif isinstance(self.controlnet, MultiControlNetModel):
631
+ if not isinstance(controlnet_conditioning_image, list):
632
+ raise TypeError("For multiple controlnets: `image` must be type `list`")
633
+ if len(controlnet_conditioning_image) != len(self.controlnet.nets):
634
+ raise ValueError(
635
+ "For multiple controlnets: `image` must have the same length as the number of controlnets."
636
+ )
637
+ for image_ in controlnet_conditioning_image:
638
+ self.check_controlnet_conditioning_image(image_, prompt, prompt_embeds)
639
+ else:
640
+ assert False
641
+
642
+ # Check `controlnet_conditioning_scale`
643
+ if isinstance(self.controlnet, ControlNetModel):
644
+ if not isinstance(controlnet_conditioning_scale, float):
645
+ raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.")
646
+ elif isinstance(self.controlnet, MultiControlNetModel):
647
+ if isinstance(controlnet_conditioning_scale, list) and len(controlnet_conditioning_scale) != len(
648
+ self.controlnet.nets
649
+ ):
650
+ raise ValueError(
651
+ "For multiple controlnets: When `controlnet_conditioning_scale` is specified as `list`, it must have"
652
+ " the same length as the number of controlnets"
653
+ )
654
+ else:
655
+ assert False
656
+
657
+ if isinstance(image, torch.Tensor) and not isinstance(mask_image, torch.Tensor):
658
+ raise TypeError("if `image` is a tensor, `mask_image` must also be a tensor")
659
+
660
+ if isinstance(image, PIL.Image.Image) and not isinstance(mask_image, PIL.Image.Image):
661
+ raise TypeError("if `image` is a PIL image, `mask_image` must also be a PIL image")
662
+
663
+ if isinstance(image, torch.Tensor):
664
+ if image.ndim != 3 and image.ndim != 4:
665
+ raise ValueError("`image` must have 3 or 4 dimensions")
666
+
667
+ if mask_image.ndim != 2 and mask_image.ndim != 3 and mask_image.ndim != 4:
668
+ raise ValueError("`mask_image` must have 2, 3, or 4 dimensions")
669
+
670
+ if image.ndim == 3:
671
+ image_batch_size = 1
672
+ image_channels, image_height, image_width = image.shape
673
+ elif image.ndim == 4:
674
+ image_batch_size, image_channels, image_height, image_width = image.shape
675
+ else:
676
+ assert False
677
+
678
+ if mask_image.ndim == 2:
679
+ mask_image_batch_size = 1
680
+ mask_image_channels = 1
681
+ mask_image_height, mask_image_width = mask_image.shape
682
+ elif mask_image.ndim == 3:
683
+ mask_image_channels = 1
684
+ mask_image_batch_size, mask_image_height, mask_image_width = mask_image.shape
685
+ elif mask_image.ndim == 4:
686
+ mask_image_batch_size, mask_image_channels, mask_image_height, mask_image_width = mask_image.shape
687
+
688
+ if image_channels != 3:
689
+ raise ValueError("`image` must have 3 channels")
690
+
691
+ if mask_image_channels != 1:
692
+ raise ValueError("`mask_image` must have 1 channel")
693
+
694
+ if image_batch_size != mask_image_batch_size:
695
+ raise ValueError("`image` and `mask_image` mush have the same batch sizes")
696
+
697
+ if image_height != mask_image_height or image_width != mask_image_width:
698
+ raise ValueError("`image` and `mask_image` must have the same height and width dimensions")
699
+
700
+ if image.min() < -1 or image.max() > 1:
701
+ raise ValueError("`image` should be in range [-1, 1]")
702
+
703
+ if mask_image.min() < 0 or mask_image.max() > 1:
704
+ raise ValueError("`mask_image` should be in range [0, 1]")
705
+ else:
706
+ mask_image_channels = 1
707
+ image_channels = 3
708
+
709
+ single_image_latent_channels = self.vae.config.latent_channels
710
+
711
+ total_latent_channels = single_image_latent_channels * 2 + mask_image_channels
712
+
713
+ if total_latent_channels != self.unet.config.in_channels:
714
+ raise ValueError(
715
+ f"The config of `pipeline.unet` expects {self.unet.config.in_channels} but received"
716
+ f" non inpainting latent channels: {single_image_latent_channels},"
717
+ f" mask channels: {mask_image_channels}, and masked image channels: {single_image_latent_channels}."
718
+ f" Please verify the config of `pipeline.unet` and the `mask_image` and `image` inputs."
719
+ )
720
+
721
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
722
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
723
+ if isinstance(generator, list) and len(generator) != batch_size:
724
+ raise ValueError(
725
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
726
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
727
+ )
728
+
729
+ if latents is None:
730
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
731
+ else:
732
+ latents = latents.to(device)
733
+
734
+ # scale the initial noise by the standard deviation required by the scheduler
735
+ latents = latents * self.scheduler.init_noise_sigma
736
+
737
+ return latents
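A quick, hedged sanity check of the latent shape logic above; standard Stable Diffusion values are assumed (4 latent channels and a VAE with four `block_out_channels`, giving `vae_scale_factor = 8`).

```py
# Shape check for prepare_latents with assumed Stable Diffusion defaults.
import torch
from diffusers.utils import randn_tensor

batch_size, num_channels_latents = 2, 4
height = width = 512
vae_scale_factor = 2 ** (4 - 1)  # 8, mirroring len(vae.config.block_out_channels) == 4

shape = (batch_size, num_channels_latents, height // vae_scale_factor, width // vae_scale_factor)
latents = randn_tensor(shape, generator=torch.Generator().manual_seed(0))
assert latents.shape == (2, 4, 64, 64)
```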
738
+
739
+ def prepare_mask_latents(self, mask_image, batch_size, height, width, dtype, device, do_classifier_free_guidance):
740
+ # resize the mask to latents shape as we concatenate the mask to the latents
741
+ # we do that before converting to dtype to avoid breaking in case we're using cpu_offload
742
+ # and half precision
743
+ mask_image = F.interpolate(mask_image, size=(height // self.vae_scale_factor, width // self.vae_scale_factor))
744
+ mask_image = mask_image.to(device=device, dtype=dtype)
745
+
746
+ # duplicate mask for each generation per prompt, using mps friendly method
747
+ if mask_image.shape[0] < batch_size:
748
+ if not batch_size % mask_image.shape[0] == 0:
749
+ raise ValueError(
750
+ "The passed mask and the required batch size don't match. Masks are supposed to be duplicated to"
751
+ f" a total batch size of {batch_size}, but {mask_image.shape[0]} masks were passed. Make sure the number"
752
+ " of masks that you pass is divisible by the total requested batch size."
753
+ )
754
+ mask_image = mask_image.repeat(batch_size // mask_image.shape[0], 1, 1, 1)
755
+
756
+ mask_image = torch.cat([mask_image] * 2) if do_classifier_free_guidance else mask_image
757
+
758
+ mask_image_latents = mask_image
759
+
760
+ return mask_image_latents
761
+
762
+ def prepare_masked_image_latents(
763
+ self, masked_image, batch_size, height, width, dtype, device, generator, do_classifier_free_guidance
764
+ ):
765
+ masked_image = masked_image.to(device=device, dtype=dtype)
766
+
767
+ # encode the mask image into latents space so we can concatenate it to the latents
768
+ if isinstance(generator, list):
769
+ masked_image_latents = [
770
+ self.vae.encode(masked_image[i : i + 1]).latent_dist.sample(generator=generator[i])
771
+ for i in range(batch_size)
772
+ ]
773
+ masked_image_latents = torch.cat(masked_image_latents, dim=0)
774
+ else:
775
+ masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)
776
+ masked_image_latents = self.vae.config.scaling_factor * masked_image_latents
777
+
778
+ # duplicate masked_image_latents for each generation per prompt, using mps friendly method
779
+ if masked_image_latents.shape[0] < batch_size:
780
+ if not batch_size % masked_image_latents.shape[0] == 0:
781
+ raise ValueError(
782
+ "The passed images and the required batch size don't match. Images are supposed to be duplicated"
783
+ f" to a total batch size of {batch_size}, but {masked_image_latents.shape[0]} images were passed."
784
+ " Make sure the number of images that you pass is divisible by the total requested batch size."
785
+ )
786
+ masked_image_latents = masked_image_latents.repeat(batch_size // masked_image_latents.shape[0], 1, 1, 1)
787
+
788
+ masked_image_latents = (
789
+ torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents
790
+ )
791
+
792
+ # aligning device to prevent device errors when concatenating it with the latent model input
793
+ masked_image_latents = masked_image_latents.to(device=device, dtype=dtype)
794
+ return masked_image_latents
795
+
796
+ def _default_height_width(self, height, width, image):
797
+ if isinstance(image, list):
798
+ image = image[0]
799
+
800
+ if height is None:
801
+ if isinstance(image, PIL.Image.Image):
802
+ height = image.height
803
+ elif isinstance(image, torch.Tensor):
804
+ height = image.shape[3]
805
+
806
+ height = (height // 8) * 8 # round down to nearest multiple of 8
807
+
808
+ if width is None:
809
+ if isinstance(image, PIL.Image.Image):
810
+ width = image.width
811
+ elif isinstance(image, torch.Tensor):
812
+ width = image.shape[2]
813
+
814
+ width = (width // 8) * 8 # round down to nearest multiple of 8
815
+
816
+ return height, width
817
+
818
+ @torch.no_grad()
819
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
820
+ def __call__(
821
+ self,
822
+ prompt: Union[str, List[str]] = None,
823
+ image: Union[torch.Tensor, PIL.Image.Image] = None,
824
+ mask_image: Union[torch.Tensor, PIL.Image.Image] = None,
825
+ controlnet_conditioning_image: Union[
826
+ torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]
827
+ ] = None,
828
+ height: Optional[int] = None,
829
+ width: Optional[int] = None,
830
+ num_inference_steps: int = 50,
831
+ guidance_scale: float = 7.5,
832
+ negative_prompt: Optional[Union[str, List[str]]] = None,
833
+ num_images_per_prompt: Optional[int] = 1,
834
+ eta: float = 0.0,
835
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
836
+ latents: Optional[torch.FloatTensor] = None,
837
+ prompt_embeds: Optional[torch.FloatTensor] = None,
838
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
839
+ output_type: Optional[str] = "pil",
840
+ return_dict: bool = True,
841
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
842
+ callback_steps: int = 1,
843
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
844
+ controlnet_conditioning_scale: Union[float, List[float]] = 1.0,
845
+ ):
846
+ r"""
847
+ Function invoked when calling the pipeline for generation.
848
+
849
+ Args:
850
+ prompt (`str` or `List[str]`, *optional*):
851
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
852
+ instead.
853
+ image (`torch.Tensor` or `PIL.Image.Image`):
854
+ `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
855
+ be masked out with `mask_image` and repainted according to `prompt`.
856
+ mask_image (`torch.Tensor` or `PIL.Image.Image`):
857
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
858
+ repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
859
+ to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
860
+ instead of 3, so the expected shape would be `(B, H, W, 1)`.
861
+ controlnet_conditioning_image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]` or `List[PIL.Image.Image]`):
862
+ The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If
863
+ the type is specified as `torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can
864
+ also be accepted as an image. The control image is automatically resized to fit the output image.
865
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
866
+ The height in pixels of the generated image.
867
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
868
+ The width in pixels of the generated image.
869
+ num_inference_steps (`int`, *optional*, defaults to 50):
870
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
871
+ expense of slower inference.
872
+ guidance_scale (`float`, *optional*, defaults to 7.5):
873
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
874
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
875
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
876
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
877
+ usually at the expense of lower image quality.
878
+ negative_prompt (`str` or `List[str]`, *optional*):
879
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead.
880
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
881
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
882
+ The number of images to generate per prompt.
883
+ eta (`float`, *optional*, defaults to 0.0):
884
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
885
+ [`schedulers.DDIMScheduler`], will be ignored for others.
886
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
887
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
888
+ to make generation deterministic.
889
+ latents (`torch.FloatTensor`, *optional*):
890
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
891
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
892
+ tensor will be generated by sampling using the supplied random `generator`.
893
+ prompt_embeds (`torch.FloatTensor`, *optional*):
894
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
895
+ provided, text embeddings will be generated from `prompt` input argument.
896
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
897
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
898
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
899
+ argument.
900
+ output_type (`str`, *optional*, defaults to `"pil"`):
901
+ The output format of the generate image. Choose between
902
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
903
+ return_dict (`bool`, *optional*, defaults to `True`):
904
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
905
+ plain tuple.
906
+ callback (`Callable`, *optional*):
907
+ A function that will be called every `callback_steps` steps during inference. The function will be
908
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
909
+ callback_steps (`int`, *optional*, defaults to 1):
910
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
911
+ called at every step.
912
+ cross_attention_kwargs (`dict`, *optional*):
913
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
914
+ `self.processor` in
915
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
916
+ controlnet_conditioning_scale (`float`, *optional*, defaults to 1.0):
917
+ The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
918
+ to the residual in the original unet.
919
+
920
+ Examples:
921
+
922
+ Returns:
923
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
924
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
925
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
926
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
927
+ (nsfw) content, according to the `safety_checker`.
928
+ """
929
+ # 0. Default height and width to unet
930
+ height, width = self._default_height_width(height, width, controlnet_conditioning_image)
931
+
932
+ # 1. Check inputs. Raise error if not correct
933
+ self.check_inputs(
934
+ prompt,
935
+ image,
936
+ mask_image,
937
+ controlnet_conditioning_image,
938
+ height,
939
+ width,
940
+ callback_steps,
941
+ negative_prompt,
942
+ prompt_embeds,
943
+ negative_prompt_embeds,
944
+ controlnet_conditioning_scale,
945
+ )
946
+
947
+ # 2. Define call parameters
948
+ if prompt is not None and isinstance(prompt, str):
949
+ batch_size = 1
950
+ elif prompt is not None and isinstance(prompt, list):
951
+ batch_size = len(prompt)
952
+ else:
953
+ batch_size = prompt_embeds.shape[0]
954
+
955
+ device = self._execution_device
956
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
957
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
958
+ # corresponds to doing no classifier free guidance.
959
+ do_classifier_free_guidance = guidance_scale > 1.0
960
+
961
+ if isinstance(self.controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float):
962
+ controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(self.controlnet.nets)
963
+
964
+ # 3. Encode input prompt
965
+ prompt_embeds = self._encode_prompt(
966
+ prompt,
967
+ device,
968
+ num_images_per_prompt,
969
+ do_classifier_free_guidance,
970
+ negative_prompt,
971
+ prompt_embeds=prompt_embeds,
972
+ negative_prompt_embeds=negative_prompt_embeds,
973
+ )
974
+
975
+ # 4. Prepare mask, image, and controlnet_conditioning_image
976
+ image = prepare_image(image)
977
+
978
+ mask_image = prepare_mask_image(mask_image)
979
+
980
+ # condition image(s)
981
+ if isinstance(self.controlnet, ControlNetModel):
982
+ controlnet_conditioning_image = prepare_controlnet_conditioning_image(
983
+ controlnet_conditioning_image=controlnet_conditioning_image,
984
+ width=width,
985
+ height=height,
986
+ batch_size=batch_size * num_images_per_prompt,
987
+ num_images_per_prompt=num_images_per_prompt,
988
+ device=device,
989
+ dtype=self.controlnet.dtype,
990
+ do_classifier_free_guidance=do_classifier_free_guidance,
991
+ )
992
+ elif isinstance(self.controlnet, MultiControlNetModel):
993
+ controlnet_conditioning_images = []
994
+
995
+ for image_ in controlnet_conditioning_image:
996
+ image_ = prepare_controlnet_conditioning_image(
997
+ controlnet_conditioning_image=image_,
998
+ width=width,
999
+ height=height,
1000
+ batch_size=batch_size * num_images_per_prompt,
1001
+ num_images_per_prompt=num_images_per_prompt,
1002
+ device=device,
1003
+ dtype=self.controlnet.dtype,
1004
+ do_classifier_free_guidance=do_classifier_free_guidance,
1005
+ )
1006
+ controlnet_conditioning_images.append(image_)
1007
+
1008
+ controlnet_conditioning_image = controlnet_conditioning_images
1009
+ else:
1010
+ assert False
1011
+
1012
+ masked_image = image * (mask_image < 0.5)
1013
+
1014
+ # 5. Prepare timesteps
1015
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
1016
+ timesteps = self.scheduler.timesteps
1017
+
1018
+ # 6. Prepare latent variables
1019
+ num_channels_latents = self.vae.config.latent_channels
1020
+ latents = self.prepare_latents(
1021
+ batch_size * num_images_per_prompt,
1022
+ num_channels_latents,
1023
+ height,
1024
+ width,
1025
+ prompt_embeds.dtype,
1026
+ device,
1027
+ generator,
1028
+ latents,
1029
+ )
1030
+
1031
+ mask_image_latents = self.prepare_mask_latents(
1032
+ mask_image,
1033
+ batch_size * num_images_per_prompt,
1034
+ height,
1035
+ width,
1036
+ prompt_embeds.dtype,
1037
+ device,
1038
+ do_classifier_free_guidance,
1039
+ )
1040
+
1041
+ masked_image_latents = self.prepare_masked_image_latents(
1042
+ masked_image,
1043
+ batch_size * num_images_per_prompt,
1044
+ height,
1045
+ width,
1046
+ prompt_embeds.dtype,
1047
+ device,
1048
+ generator,
1049
+ do_classifier_free_guidance,
1050
+ )
1051
+
1052
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
1053
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
1054
+
1055
+ # 8. Denoising loop
1056
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
1057
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
1058
+ for i, t in enumerate(timesteps):
1059
+ # expand the latents if we are doing classifier free guidance
1060
+ non_inpainting_latent_model_input = (
1061
+ torch.cat([latents] * 2) if do_classifier_free_guidance else latents
1062
+ )
1063
+
1064
+ non_inpainting_latent_model_input = self.scheduler.scale_model_input(
1065
+ non_inpainting_latent_model_input, t
1066
+ )
1067
+
1068
+ inpainting_latent_model_input = torch.cat(
1069
+ [non_inpainting_latent_model_input, mask_image_latents, masked_image_latents], dim=1
1070
+ )
1071
+
1072
+ down_block_res_samples, mid_block_res_sample = self.controlnet(
1073
+ non_inpainting_latent_model_input,
1074
+ t,
1075
+ encoder_hidden_states=prompt_embeds,
1076
+ controlnet_cond=controlnet_conditioning_image,
1077
+ conditioning_scale=controlnet_conditioning_scale,
1078
+ return_dict=False,
1079
+ )
1080
+
1081
+ # predict the noise residual
1082
+ noise_pred = self.unet(
1083
+ inpainting_latent_model_input,
1084
+ t,
1085
+ encoder_hidden_states=prompt_embeds,
1086
+ cross_attention_kwargs=cross_attention_kwargs,
1087
+ down_block_additional_residuals=down_block_res_samples,
1088
+ mid_block_additional_residual=mid_block_res_sample,
1089
+ ).sample
1090
+
1091
+ # perform guidance
1092
+ if do_classifier_free_guidance:
1093
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
1094
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
1095
+
1096
+ # compute the previous noisy sample x_t -> x_t-1
1097
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
1098
+
1099
+ # call the callback, if provided
1100
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
1101
+ progress_bar.update()
1102
+ if callback is not None and i % callback_steps == 0:
1103
+ callback(i, t, latents)
1104
+
1105
+ # If we do sequential model offloading, let's offload unet and controlnet
1106
+ # manually for max memory savings
1107
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
1108
+ self.unet.to("cpu")
1109
+ self.controlnet.to("cpu")
1110
+ torch.cuda.empty_cache()
1111
+
1112
+ if output_type == "latent":
1113
+ image = latents
1114
+ has_nsfw_concept = None
1115
+ elif output_type == "pil":
1116
+ # 8. Post-processing
1117
+ image = self.decode_latents(latents)
1118
+
1119
+ # 9. Run safety checker
1120
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
1121
+
1122
+ # 10. Convert to PIL
1123
+ image = self.numpy_to_pil(image)
1124
+ else:
1125
+ # 8. Post-processing
1126
+ image = self.decode_latents(latents)
1127
+
1128
+ # 9. Run safety checker
1129
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
1130
+
1131
+ # Offload last model to CPU
1132
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
1133
+ self.final_offload_hook.offload()
1134
+
1135
+ if not return_dict:
1136
+ return (image, has_nsfw_concept)
1137
+
1138
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
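The pipeline above can be exercised with a minimal, hedged sketch like the following. The checkpoint names, the `custom_pipeline` identifier, and the Canny preprocessing are illustrative assumptions (any Stable Diffusion inpainting checkpoint with a matching ControlNet should work); the call mirrors the `__call__` signature documented above.

```py
# Hedged usage sketch for the ControlNet inpainting pipeline above.
# Checkpoints, the custom_pipeline id, and the Canny preprocessing are illustrative assumptions.
import cv2
import numpy as np
import torch
from PIL import Image

from diffusers import ControlNetModel, DiffusionPipeline
from diffusers.utils import load_image

image = load_image(
    "https://github.com/CompVis/latent-diffusion/raw/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
)
mask_image = load_image(
    "https://github.com/CompVis/latent-diffusion/raw/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
)

# Build a control image (Canny edges) from the input; any supported conditioning would do here.
edges = cv2.Canny(np.array(image), 100, 200)
controlnet_conditioning_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    custom_pipeline="stable_diffusion_controlnet_inpaint",  # assumed community-pipeline id
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA device; enable_model_cpu_offload() trades speed for memory instead

result = pipe(
    "Face of a yellow cat, high resolution, sitting on a park bench",
    image,
    mask_image,
    controlnet_conditioning_image,
    num_inference_steps=20,
).images[0]
result.save("controlnet_inpaint_out.png")
```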
v0.19.2/stable_diffusion_controlnet_inpaint_img2img.py ADDED
@@ -0,0 +1,1119 @@
1
+ # Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/
2
+
3
+ import inspect
4
+ from typing import Any, Callable, Dict, List, Optional, Union
5
+
6
+ import numpy as np
7
+ import PIL.Image
8
+ import torch
9
+ import torch.nn.functional as F
10
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
11
+
12
+ from diffusers import AutoencoderKL, ControlNetModel, DiffusionPipeline, UNet2DConditionModel, logging
13
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker
14
+ from diffusers.schedulers import KarrasDiffusionSchedulers
15
+ from diffusers.utils import (
16
+ PIL_INTERPOLATION,
17
+ is_accelerate_available,
18
+ is_accelerate_version,
19
+ randn_tensor,
20
+ replace_example_docstring,
21
+ )
22
+
23
+
24
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
25
+
26
+ EXAMPLE_DOC_STRING = """
27
+ Examples:
28
+ ```py
29
+ >>> import numpy as np
30
+ >>> import torch
31
+ >>> from PIL import Image
32
+ >>> from stable_diffusion_controlnet_inpaint_img2img import StableDiffusionControlNetInpaintImg2ImgPipeline
33
+
34
+ >>> from transformers import AutoImageProcessor, UperNetForSemanticSegmentation
35
+ >>> from diffusers import ControlNetModel, UniPCMultistepScheduler
36
+ >>> from diffusers.utils import load_image
37
+
38
+ >>> def ade_palette():
39
+ return [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50],
40
+ [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255],
41
+ [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7],
42
+ [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82],
43
+ [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3],
44
+ [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255],
45
+ [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220],
46
+ [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224],
47
+ [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255],
48
+ [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7],
49
+ [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153],
50
+ [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255],
51
+ [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0],
52
+ [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255],
53
+ [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255],
54
+ [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255],
55
+ [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0],
56
+ [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0],
57
+ [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255],
58
+ [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255],
59
+ [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20],
60
+ [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255],
61
+ [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255],
62
+ [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255],
63
+ [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0],
64
+ [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0],
65
+ [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255],
66
+ [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112],
67
+ [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160],
68
+ [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163],
69
+ [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0],
70
+ [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0],
71
+ [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255],
72
+ [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204],
73
+ [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255],
74
+ [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255],
75
+ [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194],
76
+ [102, 255, 0], [92, 0, 255]]
77
+
78
+ >>> image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
79
+ >>> image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")
80
+
81
+ >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16)
82
+
83
+ >>> pipe = StableDiffusionControlNetInpaintImg2ImgPipeline.from_pretrained(
84
+ "runwayml/stable-diffusion-inpainting", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
85
+ )
86
+
87
+ >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
88
+ >>> pipe.enable_xformers_memory_efficient_attention()
89
+ >>> pipe.enable_model_cpu_offload()
90
+
91
+ >>> def image_to_seg(image):
92
+ pixel_values = image_processor(image, return_tensors="pt").pixel_values
93
+ with torch.no_grad():
94
+ outputs = image_segmentor(pixel_values)
95
+ seg = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
96
+ color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) # height, width, 3
97
+ palette = np.array(ade_palette())
98
+ for label, color in enumerate(palette):
99
+ color_seg[seg == label, :] = color
100
+ color_seg = color_seg.astype(np.uint8)
101
+ seg_image = Image.fromarray(color_seg)
102
+ return seg_image
103
+
104
+ >>> image = load_image(
105
+ "https://github.com/CompVis/latent-diffusion/raw/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
106
+ )
107
+
108
+ >>> mask_image = load_image(
109
+ "https://github.com/CompVis/latent-diffusion/raw/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
110
+ )
111
+
112
+ >>> controlnet_conditioning_image = image_to_seg(image)
113
+
114
+ >>> image = pipe(
115
+ "Face of a yellow cat, high resolution, sitting on a park bench",
116
+ image,
117
+ mask_image,
118
+ controlnet_conditioning_image,
119
+ num_inference_steps=20,
120
+ ).images[0]
121
+
122
+ >>> image.save("out.png")
123
+ ```
124
+ """
125
+
126
+
127
+ def prepare_image(image):
128
+ if isinstance(image, torch.Tensor):
129
+ # Batch single image
130
+ if image.ndim == 3:
131
+ image = image.unsqueeze(0)
132
+
133
+ image = image.to(dtype=torch.float32)
134
+ else:
135
+ # preprocess image
136
+ if isinstance(image, (PIL.Image.Image, np.ndarray)):
137
+ image = [image]
138
+
139
+ if isinstance(image, list) and isinstance(image[0], PIL.Image.Image):
140
+ image = [np.array(i.convert("RGB"))[None, :] for i in image]
141
+ image = np.concatenate(image, axis=0)
142
+ elif isinstance(image, list) and isinstance(image[0], np.ndarray):
143
+ image = np.concatenate([i[None, :] for i in image], axis=0)
144
+
145
+ image = image.transpose(0, 3, 1, 2)
146
+ image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
147
+
148
+ return image
149
+
150
+
151
+ def prepare_mask_image(mask_image):
152
+ if isinstance(mask_image, torch.Tensor):
153
+ if mask_image.ndim == 2:
154
+ # Batch and add channel dim for single mask
155
+ mask_image = mask_image.unsqueeze(0).unsqueeze(0)
156
+ elif mask_image.ndim == 3 and mask_image.shape[0] == 1:
157
+ # Single mask, the 0'th dimension is considered to be
158
+ # the existing batch size of 1
159
+ mask_image = mask_image.unsqueeze(0)
160
+ elif mask_image.ndim == 3 and mask_image.shape[0] != 1:
161
+ # Batch of mask, the 0'th dimension is considered to be
162
+ # the batching dimension
163
+ mask_image = mask_image.unsqueeze(1)
164
+
165
+ # Binarize mask
166
+ mask_image[mask_image < 0.5] = 0
167
+ mask_image[mask_image >= 0.5] = 1
168
+ else:
169
+ # preprocess mask
170
+ if isinstance(mask_image, (PIL.Image.Image, np.ndarray)):
171
+ mask_image = [mask_image]
172
+
173
+ if isinstance(mask_image, list) and isinstance(mask_image[0], PIL.Image.Image):
174
+ mask_image = np.concatenate([np.array(m.convert("L"))[None, None, :] for m in mask_image], axis=0)
175
+ mask_image = mask_image.astype(np.float32) / 255.0
176
+ elif isinstance(mask_image, list) and isinstance(mask_image[0], np.ndarray):
177
+ mask_image = np.concatenate([m[None, None, :] for m in mask_image], axis=0)
178
+
179
+ mask_image[mask_image < 0.5] = 0
180
+ mask_image[mask_image >= 0.5] = 1
181
+ mask_image = torch.from_numpy(mask_image)
182
+
183
+ return mask_image
184
+
185
+
186
+ def prepare_controlnet_conditioning_image(
187
+ controlnet_conditioning_image, width, height, batch_size, num_images_per_prompt, device, dtype
188
+ ):
189
+ if not isinstance(controlnet_conditioning_image, torch.Tensor):
190
+ if isinstance(controlnet_conditioning_image, PIL.Image.Image):
191
+ controlnet_conditioning_image = [controlnet_conditioning_image]
192
+
193
+ if isinstance(controlnet_conditioning_image[0], PIL.Image.Image):
194
+ controlnet_conditioning_image = [
195
+ np.array(i.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]))[None, :]
196
+ for i in controlnet_conditioning_image
197
+ ]
198
+ controlnet_conditioning_image = np.concatenate(controlnet_conditioning_image, axis=0)
199
+ controlnet_conditioning_image = np.array(controlnet_conditioning_image).astype(np.float32) / 255.0
200
+ controlnet_conditioning_image = controlnet_conditioning_image.transpose(0, 3, 1, 2)
201
+ controlnet_conditioning_image = torch.from_numpy(controlnet_conditioning_image)
202
+ elif isinstance(controlnet_conditioning_image[0], torch.Tensor):
203
+ controlnet_conditioning_image = torch.cat(controlnet_conditioning_image, dim=0)
204
+
205
+ image_batch_size = controlnet_conditioning_image.shape[0]
206
+
207
+ if image_batch_size == 1:
208
+ repeat_by = batch_size
209
+ else:
210
+ # image batch size is the same as prompt batch size
211
+ repeat_by = num_images_per_prompt
212
+
213
+ controlnet_conditioning_image = controlnet_conditioning_image.repeat_interleave(repeat_by, dim=0)
214
+
215
+ controlnet_conditioning_image = controlnet_conditioning_image.to(device=device, dtype=dtype)
216
+
217
+ return controlnet_conditioning_image
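Before the pipeline class, a small hedged sketch of what the three module-level helpers above return; the shapes and value ranges are read off the code, and the snippet assumes it runs in this module (or that the helpers are imported from this file).

```py
# Hedged sketch of the helpers' output contract; run inside this module (or import the helpers).
import numpy as np
import PIL.Image
import torch

rgb = PIL.Image.fromarray(np.zeros((64, 64, 3), dtype=np.uint8))
mask = PIL.Image.fromarray(np.full((64, 64), 255, dtype=np.uint8))

img = prepare_image(rgb)        # float32, shape (1, 3, 64, 64), values scaled to [-1, 1]
msk = prepare_mask_image(mask)  # float32, shape (1, 1, 64, 64), binarized to {0, 1}
cond = prepare_controlnet_conditioning_image(
    rgb, width=64, height=64, batch_size=2, num_images_per_prompt=1,
    device=torch.device("cpu"), dtype=torch.float32,
)                               # shape (2, 3, 64, 64), values scaled to [0, 1]
assert img.shape == (1, 3, 64, 64) and msk.shape == (1, 1, 64, 64) and cond.shape == (2, 3, 64, 64)
```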
218
+
219
+
220
+ class StableDiffusionControlNetInpaintImg2ImgPipeline(DiffusionPipeline):
221
+ """
222
+ Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/
223
+ """
224
+
225
+ _optional_components = ["safety_checker", "feature_extractor"]
226
+
227
+ def __init__(
228
+ self,
229
+ vae: AutoencoderKL,
230
+ text_encoder: CLIPTextModel,
231
+ tokenizer: CLIPTokenizer,
232
+ unet: UNet2DConditionModel,
233
+ controlnet: ControlNetModel,
234
+ scheduler: KarrasDiffusionSchedulers,
235
+ safety_checker: StableDiffusionSafetyChecker,
236
+ feature_extractor: CLIPImageProcessor,
237
+ requires_safety_checker: bool = True,
238
+ ):
239
+ super().__init__()
240
+
241
+ if safety_checker is None and requires_safety_checker:
242
+ logger.warning(
243
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
244
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
245
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
246
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
247
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
248
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
249
+ )
250
+
251
+ if safety_checker is not None and feature_extractor is None:
252
+ raise ValueError(
253
+ "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
254
+ " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
255
+ )
256
+
257
+ self.register_modules(
258
+ vae=vae,
259
+ text_encoder=text_encoder,
260
+ tokenizer=tokenizer,
261
+ unet=unet,
262
+ controlnet=controlnet,
263
+ scheduler=scheduler,
264
+ safety_checker=safety_checker,
265
+ feature_extractor=feature_extractor,
266
+ )
267
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
268
+ self.register_to_config(requires_safety_checker=requires_safety_checker)
269
+
270
+ def enable_vae_slicing(self):
271
+ r"""
272
+ Enable sliced VAE decoding.
273
+
274
+ When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
275
+ steps. This is useful to save some memory and allow larger batch sizes.
276
+ """
277
+ self.vae.enable_slicing()
278
+
279
+ def disable_vae_slicing(self):
280
+ r"""
281
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
282
+ computing decoding in one step.
283
+ """
284
+ self.vae.disable_slicing()
285
+
286
+ def enable_sequential_cpu_offload(self, gpu_id=0):
287
+ r"""
288
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
289
+ text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a
290
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
291
+ Note that offloading happens on a submodule basis. Memory savings are higher than with
292
+ `enable_model_cpu_offload`, but performance is lower.
293
+ """
294
+ if is_accelerate_available():
295
+ from accelerate import cpu_offload
296
+ else:
297
+ raise ImportError("Please install accelerate via `pip install accelerate`")
298
+
299
+ device = torch.device(f"cuda:{gpu_id}")
300
+
301
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.controlnet]:
302
+ cpu_offload(cpu_offloaded_model, device)
303
+
304
+ if self.safety_checker is not None:
305
+ cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True)
306
+
307
+ def enable_model_cpu_offload(self, gpu_id=0):
308
+ r"""
309
+ Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
310
+ to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
311
+ method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
312
+ `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
313
+ """
314
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
315
+ from accelerate import cpu_offload_with_hook
316
+ else:
317
+ raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
318
+
319
+ device = torch.device(f"cuda:{gpu_id}")
320
+
321
+ hook = None
322
+ for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
323
+ _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
324
+
325
+ if self.safety_checker is not None:
326
+ # the safety checker can offload the vae again
327
+ _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
328
+
329
+ # the controlnet hook has to be manually offloaded as it alternates with the unet
330
+ cpu_offload_with_hook(self.controlnet, device)
331
+
332
+ # We'll offload the last model manually.
333
+ self.final_offload_hook = hook
334
+
335
+ @property
336
+ def _execution_device(self):
337
+ r"""
338
+ Returns the device on which the pipeline's models will be executed. After calling
339
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
340
+ hooks.
341
+ """
342
+ if not hasattr(self.unet, "_hf_hook"):
343
+ return self.device
344
+ for module in self.unet.modules():
345
+ if (
346
+ hasattr(module, "_hf_hook")
347
+ and hasattr(module._hf_hook, "execution_device")
348
+ and module._hf_hook.execution_device is not None
349
+ ):
350
+ return torch.device(module._hf_hook.execution_device)
351
+ return self.device
352
+
353
+ def _encode_prompt(
354
+ self,
355
+ prompt,
356
+ device,
357
+ num_images_per_prompt,
358
+ do_classifier_free_guidance,
359
+ negative_prompt=None,
360
+ prompt_embeds: Optional[torch.FloatTensor] = None,
361
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
362
+ ):
363
+ r"""
364
+ Encodes the prompt into text encoder hidden states.
365
+
366
+ Args:
367
+ prompt (`str` or `List[str]`, *optional*):
368
+ prompt to be encoded
369
+ device: (`torch.device`):
370
+ torch device
371
+ num_images_per_prompt (`int`):
372
+ number of images that should be generated per prompt
373
+ do_classifier_free_guidance (`bool`):
374
+ whether to use classifier free guidance or not
375
+ negative_prompt (`str` or `List[str]`, *optional*):
376
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead.
377
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
378
+ prompt_embeds (`torch.FloatTensor`, *optional*):
379
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
380
+ provided, text embeddings will be generated from `prompt` input argument.
381
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
382
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
383
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
384
+ argument.
385
+ """
386
+ if prompt is not None and isinstance(prompt, str):
387
+ batch_size = 1
388
+ elif prompt is not None and isinstance(prompt, list):
389
+ batch_size = len(prompt)
390
+ else:
391
+ batch_size = prompt_embeds.shape[0]
392
+
393
+ if prompt_embeds is None:
394
+ text_inputs = self.tokenizer(
395
+ prompt,
396
+ padding="max_length",
397
+ max_length=self.tokenizer.model_max_length,
398
+ truncation=True,
399
+ return_tensors="pt",
400
+ )
401
+ text_input_ids = text_inputs.input_ids
402
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
403
+
404
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
405
+ text_input_ids, untruncated_ids
406
+ ):
407
+ removed_text = self.tokenizer.batch_decode(
408
+ untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
409
+ )
410
+ logger.warning(
411
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
412
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
413
+ )
414
+
415
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
416
+ attention_mask = text_inputs.attention_mask.to(device)
417
+ else:
418
+ attention_mask = None
419
+
420
+ prompt_embeds = self.text_encoder(
421
+ text_input_ids.to(device),
422
+ attention_mask=attention_mask,
423
+ )
424
+ prompt_embeds = prompt_embeds[0]
425
+
426
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
427
+
428
+ bs_embed, seq_len, _ = prompt_embeds.shape
429
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
430
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
431
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
432
+
433
+ # get unconditional embeddings for classifier free guidance
434
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
435
+ uncond_tokens: List[str]
436
+ if negative_prompt is None:
437
+ uncond_tokens = [""] * batch_size
438
+ elif type(prompt) is not type(negative_prompt):
439
+ raise TypeError(
440
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
441
+ f" {type(prompt)}."
442
+ )
443
+ elif isinstance(negative_prompt, str):
444
+ uncond_tokens = [negative_prompt]
445
+ elif batch_size != len(negative_prompt):
446
+ raise ValueError(
447
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
448
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
449
+ " the batch size of `prompt`."
450
+ )
451
+ else:
452
+ uncond_tokens = negative_prompt
453
+
454
+ max_length = prompt_embeds.shape[1]
455
+ uncond_input = self.tokenizer(
456
+ uncond_tokens,
457
+ padding="max_length",
458
+ max_length=max_length,
459
+ truncation=True,
460
+ return_tensors="pt",
461
+ )
462
+
463
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
464
+ attention_mask = uncond_input.attention_mask.to(device)
465
+ else:
466
+ attention_mask = None
467
+
468
+ negative_prompt_embeds = self.text_encoder(
469
+ uncond_input.input_ids.to(device),
470
+ attention_mask=attention_mask,
471
+ )
472
+ negative_prompt_embeds = negative_prompt_embeds[0]
473
+
474
+ if do_classifier_free_guidance:
475
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
476
+ seq_len = negative_prompt_embeds.shape[1]
477
+
478
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
479
+
480
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
481
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
482
+
483
+ # For classifier free guidance, we need to do two forward passes.
484
+ # Here we concatenate the unconditional and text embeddings into a single batch
485
+ # to avoid doing two forward passes
486
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
487
+
488
+ return prompt_embeds
489
+
490
+ def run_safety_checker(self, image, device, dtype):
491
+ if self.safety_checker is not None:
492
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
493
+ image, has_nsfw_concept = self.safety_checker(
494
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
495
+ )
496
+ else:
497
+ has_nsfw_concept = None
498
+ return image, has_nsfw_concept
499
+
500
+ def decode_latents(self, latents):
501
+ latents = 1 / self.vae.config.scaling_factor * latents
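+ # undo the VAE scaling that was applied when these latents were produced, so the decoder sees values in its expected range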
502
+ image = self.vae.decode(latents).sample
503
+ image = (image / 2 + 0.5).clamp(0, 1)
504
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
505
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
506
+ return image
507
+
508
+ def prepare_extra_step_kwargs(self, generator, eta):
509
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
510
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
511
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
512
+ # and should be between [0, 1]
513
+
514
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
515
+ extra_step_kwargs = {}
516
+ if accepts_eta:
517
+ extra_step_kwargs["eta"] = eta
518
+
519
+ # check if the scheduler accepts generator
520
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
521
+ if accepts_generator:
522
+ extra_step_kwargs["generator"] = generator
523
+ return extra_step_kwargs
524
+
525
+ def check_inputs(
526
+ self,
527
+ prompt,
528
+ image,
529
+ mask_image,
530
+ controlnet_conditioning_image,
531
+ height,
532
+ width,
533
+ callback_steps,
534
+ negative_prompt=None,
535
+ prompt_embeds=None,
536
+ negative_prompt_embeds=None,
537
+ strength=None,
538
+ ):
539
+ if height % 8 != 0 or width % 8 != 0:
540
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
541
+
542
+ if (callback_steps is None) or (
543
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
544
+ ):
545
+ raise ValueError(
546
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
547
+ f" {type(callback_steps)}."
548
+ )
549
+
550
+ if prompt is not None and prompt_embeds is not None:
551
+ raise ValueError(
552
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
553
+ " only forward one of the two."
554
+ )
555
+ elif prompt is None and prompt_embeds is None:
556
+ raise ValueError(
557
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
558
+ )
559
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
560
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
561
+
562
+ if negative_prompt is not None and negative_prompt_embeds is not None:
563
+ raise ValueError(
564
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
565
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
566
+ )
567
+
568
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
569
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
570
+ raise ValueError(
571
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
572
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
573
+ f" {negative_prompt_embeds.shape}."
574
+ )
575
+
576
+ controlnet_cond_image_is_pil = isinstance(controlnet_conditioning_image, PIL.Image.Image)
577
+ controlnet_cond_image_is_tensor = isinstance(controlnet_conditioning_image, torch.Tensor)
578
+ controlnet_cond_image_is_pil_list = isinstance(controlnet_conditioning_image, list) and isinstance(
579
+ controlnet_conditioning_image[0], PIL.Image.Image
580
+ )
581
+ controlnet_cond_image_is_tensor_list = isinstance(controlnet_conditioning_image, list) and isinstance(
582
+ controlnet_conditioning_image[0], torch.Tensor
583
+ )
584
+
585
+ if (
586
+ not controlnet_cond_image_is_pil
587
+ and not controlnet_cond_image_is_tensor
588
+ and not controlnet_cond_image_is_pil_list
589
+ and not controlnet_cond_image_is_tensor_list
590
+ ):
591
+ raise TypeError(
592
+ "image must be passed and be one of PIL image, torch tensor, list of PIL images, or list of torch tensors"
593
+ )
594
+
595
+ if controlnet_cond_image_is_pil:
596
+ controlnet_cond_image_batch_size = 1
597
+ elif controlnet_cond_image_is_tensor:
598
+ controlnet_cond_image_batch_size = controlnet_conditioning_image.shape[0]
599
+ elif controlnet_cond_image_is_pil_list:
600
+ controlnet_cond_image_batch_size = len(controlnet_conditioning_image)
601
+ elif controlnet_cond_image_is_tensor_list:
602
+ controlnet_cond_image_batch_size = len(controlnet_conditioning_image)
603
+
604
+ if prompt is not None and isinstance(prompt, str):
605
+ prompt_batch_size = 1
606
+ elif prompt is not None and isinstance(prompt, list):
607
+ prompt_batch_size = len(prompt)
608
+ elif prompt_embeds is not None:
609
+ prompt_batch_size = prompt_embeds.shape[0]
610
+
611
+ if controlnet_cond_image_batch_size != 1 and controlnet_cond_image_batch_size != prompt_batch_size:
612
+ raise ValueError(
613
+ f"If image batch size is not 1, image batch size must be same as prompt batch size. image batch size: {controlnet_cond_image_batch_size}, prompt batch size: {prompt_batch_size}"
614
+ )
615
+
616
+ if isinstance(image, torch.Tensor) and not isinstance(mask_image, torch.Tensor):
617
+ raise TypeError("if `image` is a tensor, `mask_image` must also be a tensor")
618
+
619
+ if isinstance(image, PIL.Image.Image) and not isinstance(mask_image, PIL.Image.Image):
620
+ raise TypeError("if `image` is a PIL image, `mask_image` must also be a PIL image")
621
+
622
+ if isinstance(image, torch.Tensor):
623
+ if image.ndim != 3 and image.ndim != 4:
624
+ raise ValueError("`image` must have 3 or 4 dimensions")
625
+
626
+ if mask_image.ndim != 2 and mask_image.ndim != 3 and mask_image.ndim != 4:
627
+ raise ValueError("`mask_image` must have 2, 3, or 4 dimensions")
628
+
629
+ if image.ndim == 3:
630
+ image_batch_size = 1
631
+ image_channels, image_height, image_width = image.shape
632
+ elif image.ndim == 4:
633
+ image_batch_size, image_channels, image_height, image_width = image.shape
634
+
635
+ if mask_image.ndim == 2:
636
+ mask_image_batch_size = 1
637
+ mask_image_channels = 1
638
+ mask_image_height, mask_image_width = mask_image.shape
639
+ elif mask_image.ndim == 3:
640
+ mask_image_channels = 1
641
+ mask_image_batch_size, mask_image_height, mask_image_width = mask_image.shape
642
+ elif mask_image.ndim == 4:
643
+ mask_image_batch_size, mask_image_channels, mask_image_height, mask_image_width = mask_image.shape
644
+
645
+ if image_channels != 3:
646
+ raise ValueError("`image` must have 3 channels")
647
+
648
+ if mask_image_channels != 1:
649
+ raise ValueError("`mask_image` must have 1 channel")
650
+
651
+ if image_batch_size != mask_image_batch_size:
652
+ raise ValueError("`image` and `mask_image` mush have the same batch sizes")
653
+
654
+ if image_height != mask_image_height or image_width != mask_image_width:
655
+ raise ValueError("`image` and `mask_image` must have the same height and width dimensions")
656
+
657
+ if image.min() < -1 or image.max() > 1:
658
+ raise ValueError("`image` should be in range [-1, 1]")
659
+
660
+ if mask_image.min() < 0 or mask_image.max() > 1:
661
+ raise ValueError("`mask_image` should be in range [0, 1]")
662
+ else:
663
+ mask_image_channels = 1
664
+ image_channels = 3
665
+
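+ # the inpainting UNet expects the noisy latents, the masked-image latents and a one-channel mask concatenated
+ # along the channel dimension (e.g. 4 + 4 + 1 = 9 channels for the standard Stable Diffusion inpainting UNet)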
666
+ single_image_latent_channels = self.vae.config.latent_channels
667
+
668
+ total_latent_channels = single_image_latent_channels * 2 + mask_image_channels
669
+
670
+ if total_latent_channels != self.unet.config.in_channels:
671
+ raise ValueError(
672
+ f"The config of `pipeline.unet` expects {self.unet.config.in_channels} but received"
673
+ f" non inpainting latent channels: {single_image_latent_channels},"
674
+ f" mask channels: {mask_image_channels}, and masked image channels: {single_image_latent_channels}."
675
+ f" Please verify the config of `pipeline.unet` and the `mask_image` and `image` inputs."
676
+ )
677
+
678
+ if strength < 0 or strength > 1:
679
+ raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
680
+
681
+ def get_timesteps(self, num_inference_steps, strength, device):
682
+ # get the original timestep using init_timestep
683
+ init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
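+ # e.g. num_inference_steps=50 with strength=0.8 gives init_timestep=40, so denoising starts 10 steps in and runs for 40 steps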
684
+
685
+ t_start = max(num_inference_steps - init_timestep, 0)
686
+ timesteps = self.scheduler.timesteps[t_start:]
687
+
688
+ return timesteps, num_inference_steps - t_start
689
+
690
+ def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None):
691
+ if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
692
+ raise ValueError(
693
+ f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
694
+ )
695
+
696
+ image = image.to(device=device, dtype=dtype)
697
+
698
+ batch_size = batch_size * num_images_per_prompt
699
+ if isinstance(generator, list) and len(generator) != batch_size:
700
+ raise ValueError(
701
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
702
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
703
+ )
704
+
705
+ if isinstance(generator, list):
706
+ init_latents = [
707
+ self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
708
+ ]
709
+ init_latents = torch.cat(init_latents, dim=0)
710
+ else:
711
+ init_latents = self.vae.encode(image).latent_dist.sample(generator)
712
+
713
+ init_latents = self.vae.config.scaling_factor * init_latents
714
+
715
+ if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
716
+ raise ValueError(
717
+ f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
718
+ )
719
+ elif batch_size > init_latents.shape[0]:
720
+ # duplicate the image latents to match the requested batch size
+ init_latents = init_latents.repeat(batch_size // init_latents.shape[0], 1, 1, 1)
721
+
722
+ shape = init_latents.shape
723
+ noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
724
+
725
+ # noise the encoded image to the starting timestep selected by `strength` (img2img-style initialization)
726
+ init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
727
+ latents = init_latents
728
+
729
+ return latents
730
+
731
+ def prepare_mask_latents(self, mask_image, batch_size, height, width, dtype, device, do_classifier_free_guidance):
732
+ # resize the mask to latents shape as we concatenate the mask to the latents
733
+ # we do that before converting to dtype to avoid breaking in case we're using cpu_offload
734
+ # and half precision
735
+ mask_image = F.interpolate(mask_image, size=(height // self.vae_scale_factor, width // self.vae_scale_factor))
736
+ mask_image = mask_image.to(device=device, dtype=dtype)
737
+
738
+ # duplicate mask for each generation per prompt, using mps friendly method
739
+ if mask_image.shape[0] < batch_size:
740
+ if not batch_size % mask_image.shape[0] == 0:
741
+ raise ValueError(
742
+ "The passed mask and the required batch size don't match. Masks are supposed to be duplicated to"
743
+ f" a total batch size of {batch_size}, but {mask_image.shape[0]} masks were passed. Make sure the number"
744
+ " of masks that you pass is divisible by the total requested batch size."
745
+ )
746
+ mask_image = mask_image.repeat(batch_size // mask_image.shape[0], 1, 1, 1)
747
+
748
+ mask_image = torch.cat([mask_image] * 2) if do_classifier_free_guidance else mask_image
749
+
750
+ mask_image_latents = mask_image
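+ # unlike the image, the mask is not passed through the VAE; the resized mask itself is used as the extra single channel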
751
+
752
+ return mask_image_latents
753
+
754
+ def prepare_masked_image_latents(
755
+ self, masked_image, batch_size, height, width, dtype, device, generator, do_classifier_free_guidance
756
+ ):
757
+ masked_image = masked_image.to(device=device, dtype=dtype)
758
+
759
+ # encode the masked image into latent space so we can concatenate it to the latents
760
+ if isinstance(generator, list):
761
+ masked_image_latents = [
762
+ self.vae.encode(masked_image[i : i + 1]).latent_dist.sample(generator=generator[i])
763
+ for i in range(batch_size)
764
+ ]
765
+ masked_image_latents = torch.cat(masked_image_latents, dim=0)
766
+ else:
767
+ masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)
768
+ masked_image_latents = self.vae.config.scaling_factor * masked_image_latents
769
+
770
+ # duplicate masked_image_latents for each generation per prompt, using mps friendly method
771
+ if masked_image_latents.shape[0] < batch_size:
772
+ if not batch_size % masked_image_latents.shape[0] == 0:
773
+ raise ValueError(
774
+ "The passed images and the required batch size don't match. Images are supposed to be duplicated"
775
+ f" to a total batch size of {batch_size}, but {masked_image_latents.shape[0]} images were passed."
776
+ " Make sure the number of images that you pass is divisible by the total requested batch size."
777
+ )
778
+ masked_image_latents = masked_image_latents.repeat(batch_size // masked_image_latents.shape[0], 1, 1, 1)
779
+
780
+ masked_image_latents = (
781
+ torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents
782
+ )
783
+
784
+ # aligning device to prevent device errors when concating it with the latent model input
785
+ masked_image_latents = masked_image_latents.to(device=device, dtype=dtype)
786
+ return masked_image_latents
787
+
788
+ def _default_height_width(self, height, width, image):
789
+ if isinstance(image, list):
790
+ image = image[0]
791
+
792
+ if height is None:
793
+ if isinstance(image, PIL.Image.Image):
794
+ height = image.height
795
+ elif isinstance(image, torch.Tensor):
796
+ height = image.shape[3]
797
+
798
+ height = (height // 8) * 8 # round down to nearest multiple of 8
799
+
800
+ if width is None:
801
+ if isinstance(image, PIL.Image.Image):
802
+ width = image.width
803
+ elif isinstance(image, torch.Tensor):
804
+ width = image.shape[2]
805
+
806
+ width = (width // 8) * 8 # round down to nearest multiple of 8
807
+
808
+ return height, width
809
+
810
+ @torch.no_grad()
811
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
812
+ def __call__(
813
+ self,
814
+ prompt: Union[str, List[str]] = None,
815
+ image: Union[torch.Tensor, PIL.Image.Image] = None,
816
+ mask_image: Union[torch.Tensor, PIL.Image.Image] = None,
817
+ controlnet_conditioning_image: Union[
818
+ torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]
819
+ ] = None,
820
+ strength: float = 0.8,
821
+ height: Optional[int] = None,
822
+ width: Optional[int] = None,
823
+ num_inference_steps: int = 50,
824
+ guidance_scale: float = 7.5,
825
+ negative_prompt: Optional[Union[str, List[str]]] = None,
826
+ num_images_per_prompt: Optional[int] = 1,
827
+ eta: float = 0.0,
828
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
829
+ latents: Optional[torch.FloatTensor] = None,
830
+ prompt_embeds: Optional[torch.FloatTensor] = None,
831
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
832
+ output_type: Optional[str] = "pil",
833
+ return_dict: bool = True,
834
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
835
+ callback_steps: int = 1,
836
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
837
+ controlnet_conditioning_scale: float = 1.0,
838
+ ):
839
+ r"""
840
+ Function invoked when calling the pipeline for generation.
841
+
842
+ Args:
843
+ prompt (`str` or `List[str]`, *optional*):
844
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
845
+ instead.
846
+ image (`torch.Tensor` or `PIL.Image.Image`):
847
+ `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
848
+ be masked out with `mask_image` and repainted according to `prompt`.
849
+ mask_image (`torch.Tensor` or `PIL.Image.Image`):
850
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
851
+ repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
852
+ to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
853
+ instead of 3, so the expected shape would be `(B, H, W, 1)`.
854
+ controlnet_conditioning_image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]` or `List[PIL.Image.Image]`):
855
+ The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If
856
+ the type is specified as `torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can
857
+ also be accepted as an image. The control image is automatically resized to fit the output image.
858
+ strength (`float`, *optional*):
859
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
860
+ will be used as a starting point, adding more noise to it the larger the `strength`. The number of
861
+ denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
862
+ be maximum and the denoising process will run for the full number of iterations specified in
863
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
864
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
865
+ The height in pixels of the generated image.
866
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
867
+ The width in pixels of the generated image.
868
+ num_inference_steps (`int`, *optional*, defaults to 50):
869
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
870
+ expense of slower inference.
871
+ guidance_scale (`float`, *optional*, defaults to 7.5):
872
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
873
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
874
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
875
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
876
+ usually at the expense of lower image quality.
877
+ negative_prompt (`str` or `List[str]`, *optional*):
878
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead.
879
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
880
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
881
+ The number of images to generate per prompt.
882
+ eta (`float`, *optional*, defaults to 0.0):
883
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
884
+ [`schedulers.DDIMScheduler`], will be ignored for others.
885
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
886
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
887
+ to make generation deterministic.
888
+ latents (`torch.FloatTensor`, *optional*):
889
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
890
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
891
+ tensor will be generated by sampling using the supplied random `generator`.
892
+ prompt_embeds (`torch.FloatTensor`, *optional*):
893
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
894
+ provided, text embeddings will be generated from `prompt` input argument.
895
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
896
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
897
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
898
+ argument.
899
+ output_type (`str`, *optional*, defaults to `"pil"`):
900
+ The output format of the generated image. Choose between
901
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
902
+ return_dict (`bool`, *optional*, defaults to `True`):
903
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
904
+ plain tuple.
905
+ callback (`Callable`, *optional*):
906
+ A function that will be called every `callback_steps` steps during inference. The function will be
907
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
908
+ callback_steps (`int`, *optional*, defaults to 1):
909
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
910
+ called at every step.
911
+ cross_attention_kwargs (`dict`, *optional*):
912
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
913
+ `self.processor` in
914
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
915
+ controlnet_conditioning_scale (`float`, *optional*, defaults to 1.0):
916
+ The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
917
+ to the residual in the original unet.
918
+
919
+ Examples:
920
+
921
+ Returns:
922
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
923
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
924
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
925
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
926
+ (nsfw) content, according to the `safety_checker`.
927
+ """
928
+ # 0. Default height and width to unet
929
+ height, width = self._default_height_width(height, width, controlnet_conditioning_image)
930
+
931
+ # 1. Check inputs. Raise error if not correct
932
+ self.check_inputs(
933
+ prompt,
934
+ image,
935
+ mask_image,
936
+ controlnet_conditioning_image,
937
+ height,
938
+ width,
939
+ callback_steps,
940
+ negative_prompt,
941
+ prompt_embeds,
942
+ negative_prompt_embeds,
943
+ strength,
944
+ )
945
+
946
+ # 2. Define call parameters
947
+ if prompt is not None and isinstance(prompt, str):
948
+ batch_size = 1
949
+ elif prompt is not None and isinstance(prompt, list):
950
+ batch_size = len(prompt)
951
+ else:
952
+ batch_size = prompt_embeds.shape[0]
953
+
954
+ device = self._execution_device
955
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
956
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
957
+ # corresponds to doing no classifier free guidance.
958
+ do_classifier_free_guidance = guidance_scale > 1.0
959
+
960
+ # 3. Encode input prompt
961
+ prompt_embeds = self._encode_prompt(
962
+ prompt,
963
+ device,
964
+ num_images_per_prompt,
965
+ do_classifier_free_guidance,
966
+ negative_prompt,
967
+ prompt_embeds=prompt_embeds,
968
+ negative_prompt_embeds=negative_prompt_embeds,
969
+ )
970
+
971
+ # 4. Prepare mask, image, and controlnet_conditioning_image
972
+ image = prepare_image(image)
973
+
974
+ mask_image = prepare_mask_image(mask_image)
975
+
976
+ controlnet_conditioning_image = prepare_controlnet_conditioning_image(
977
+ controlnet_conditioning_image,
978
+ width,
979
+ height,
980
+ batch_size * num_images_per_prompt,
981
+ num_images_per_prompt,
982
+ device,
983
+ self.controlnet.dtype,
984
+ )
985
+
986
+ masked_image = image * (mask_image < 0.5)
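+ # zero out the regions to be repainted (mask >= 0.5) so that only preserved pixels end up in the masked-image latents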
987
+
988
+ # 5. Prepare timesteps
989
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
990
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device)
991
+ latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
992
+
993
+ # 6. Prepare latent variables
994
+ latents = self.prepare_latents(
995
+ image,
996
+ latent_timestep,
997
+ batch_size,
998
+ num_images_per_prompt,
999
+ prompt_embeds.dtype,
1000
+ device,
1001
+ generator,
1002
+ )
1003
+
1004
+ mask_image_latents = self.prepare_mask_latents(
1005
+ mask_image,
1006
+ batch_size * num_images_per_prompt,
1007
+ height,
1008
+ width,
1009
+ prompt_embeds.dtype,
1010
+ device,
1011
+ do_classifier_free_guidance,
1012
+ )
1013
+
1014
+ masked_image_latents = self.prepare_masked_image_latents(
1015
+ masked_image,
1016
+ batch_size * num_images_per_prompt,
1017
+ height,
1018
+ width,
1019
+ prompt_embeds.dtype,
1020
+ device,
1021
+ generator,
1022
+ do_classifier_free_guidance,
1023
+ )
1024
+
1025
+ if do_classifier_free_guidance:
1026
+ controlnet_conditioning_image = torch.cat([controlnet_conditioning_image] * 2)
1027
+
1028
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
1029
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
1030
+
1031
+ # 8. Denoising loop
1032
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
1033
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
1034
+ for i, t in enumerate(timesteps):
1035
+ # expand the latents if we are doing classifier free guidance
1036
+ non_inpainting_latent_model_input = (
1037
+ torch.cat([latents] * 2) if do_classifier_free_guidance else latents
1038
+ )
1039
+
1040
+ non_inpainting_latent_model_input = self.scheduler.scale_model_input(
1041
+ non_inpainting_latent_model_input, t
1042
+ )
1043
+
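+ # the inpainting UNet input stacks the scaled noisy latents with the mask and masked-image latents along the
+ # channel dimension, while the ControlNet below only receives the plain latents plus the conditioning image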
1044
+ inpainting_latent_model_input = torch.cat(
1045
+ [non_inpainting_latent_model_input, mask_image_latents, masked_image_latents], dim=1
1046
+ )
1047
+
1048
+ down_block_res_samples, mid_block_res_sample = self.controlnet(
1049
+ non_inpainting_latent_model_input,
1050
+ t,
1051
+ encoder_hidden_states=prompt_embeds,
1052
+ controlnet_cond=controlnet_conditioning_image,
1053
+ return_dict=False,
1054
+ )
1055
+
1056
+ down_block_res_samples = [
1057
+ down_block_res_sample * controlnet_conditioning_scale
1058
+ for down_block_res_sample in down_block_res_samples
1059
+ ]
1060
+ mid_block_res_sample *= controlnet_conditioning_scale
1061
+
1062
+ # predict the noise residual
1063
+ noise_pred = self.unet(
1064
+ inpainting_latent_model_input,
1065
+ t,
1066
+ encoder_hidden_states=prompt_embeds,
1067
+ cross_attention_kwargs=cross_attention_kwargs,
1068
+ down_block_additional_residuals=down_block_res_samples,
1069
+ mid_block_additional_residual=mid_block_res_sample,
1070
+ ).sample
1071
+
1072
+ # perform guidance
1073
+ if do_classifier_free_guidance:
1074
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
1075
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
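+ # classifier-free guidance: eps = eps_uncond + guidance_scale * (eps_text - eps_uncond)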
1076
+
1077
+ # compute the previous noisy sample x_t -> x_t-1
1078
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
1079
+
1080
+ # call the callback, if provided
1081
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
1082
+ progress_bar.update()
1083
+ if callback is not None and i % callback_steps == 0:
1084
+ callback(i, t, latents)
1085
+
1086
+ # If we do sequential model offloading, let's offload unet and controlnet
1087
+ # manually for max memory savings
1088
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
1089
+ self.unet.to("cpu")
1090
+ self.controlnet.to("cpu")
1091
+ torch.cuda.empty_cache()
1092
+
1093
+ if output_type == "latent":
1094
+ image = latents
1095
+ has_nsfw_concept = None
1096
+ elif output_type == "pil":
1097
+ # 8. Post-processing
1098
+ image = self.decode_latents(latents)
1099
+
1100
+ # 9. Run safety checker
1101
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
1102
+
1103
+ # 10. Convert to PIL
1104
+ image = self.numpy_to_pil(image)
1105
+ else:
1106
+ # 8. Post-processing
1107
+ image = self.decode_latents(latents)
1108
+
1109
+ # 9. Run safety checker
1110
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
1111
+
1112
+ # Offload last model to CPU
1113
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
1114
+ self.final_offload_hook.offload()
1115
+
1116
+ if not return_dict:
1117
+ return (image, has_nsfw_concept)
1118
+
1119
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
v0.19.2/stable_diffusion_controlnet_reference.py ADDED
@@ -0,0 +1,834 @@
1
+ # Inspired by: https://github.com/Mikubill/sd-webui-controlnet/discussions/1236 and https://github.com/Mikubill/sd-webui-controlnet/discussions/1280
2
+ from typing import Any, Callable, Dict, List, Optional, Tuple, Union
3
+
4
+ import numpy as np
5
+ import PIL.Image
6
+ import torch
7
+
8
+ from diffusers import StableDiffusionControlNetPipeline
9
+ from diffusers.models import ControlNetModel
10
+ from diffusers.models.attention import BasicTransformerBlock
11
+ from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D, CrossAttnUpBlock2D, DownBlock2D, UpBlock2D
12
+ from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel
13
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
14
+ from diffusers.utils import is_compiled_module, logging, randn_tensor
15
+
16
+
17
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
18
+
19
+ EXAMPLE_DOC_STRING = """
20
+ Examples:
21
+ ```py
22
+ >>> import cv2
23
+ >>> import torch
24
+ >>> import numpy as np
25
+ >>> from PIL import Image
26
+ >>> from diffusers import UniPCMultistepScheduler
27
+ >>> from diffusers.utils import load_image
28
+
29
+ >>> input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
30
+
31
+ >>> # get canny image
32
+ >>> image = cv2.Canny(np.array(input_image), 100, 200)
33
+ >>> image = image[:, :, None]
34
+ >>> image = np.concatenate([image, image, image], axis=2)
35
+ >>> canny_image = Image.fromarray(image)
36
+
37
+ >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
38
+ >>> pipe = StableDiffusionControlNetReferencePipeline.from_pretrained(
39
+ "runwayml/stable-diffusion-v1-5",
40
+ controlnet=controlnet,
41
+ safety_checker=None,
42
+ torch_dtype=torch.float16
43
+ ).to('cuda:0')
44
+
45
+ >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe_controlnet.scheduler.config)
46
+
47
+ >>> result_img = pipe(ref_image=input_image,
48
+ prompt="1girl",
49
+ image=canny_image,
50
+ num_inference_steps=20,
51
+ reference_attn=True,
52
+ reference_adain=True).images[0]
53
+
54
+ >>> result_img.show()
55
+ ```
56
+ """
57
+
58
+
59
+ def torch_dfs(model: torch.nn.Module):
60
+ result = [model]
61
+ for child in model.children():
62
+ result += torch_dfs(child)
63
+ return result
64
+
65
+
66
+ class StableDiffusionControlNetReferencePipeline(StableDiffusionControlNetPipeline):
67
+ def prepare_ref_latents(self, refimage, batch_size, dtype, device, generator, do_classifier_free_guidance):
68
+ refimage = refimage.to(device=device, dtype=dtype)
69
+
70
+ # encode the mask image into latents space so we can concatenate it to the latents
71
+ if isinstance(generator, list):
72
+ ref_image_latents = [
73
+ self.vae.encode(refimage[i : i + 1]).latent_dist.sample(generator=generator[i])
74
+ for i in range(batch_size)
75
+ ]
76
+ ref_image_latents = torch.cat(ref_image_latents, dim=0)
77
+ else:
78
+ ref_image_latents = self.vae.encode(refimage).latent_dist.sample(generator=generator)
79
+ ref_image_latents = self.vae.config.scaling_factor * ref_image_latents
80
+
81
+ # duplicate mask and ref_image_latents for each generation per prompt, using mps friendly method
82
+ if ref_image_latents.shape[0] < batch_size:
83
+ if not batch_size % ref_image_latents.shape[0] == 0:
84
+ raise ValueError(
85
+ "The passed images and the required batch size don't match. Images are supposed to be duplicated"
86
+ f" to a total batch size of {batch_size}, but {ref_image_latents.shape[0]} images were passed."
87
+ " Make sure the number of images that you pass is divisible by the total requested batch size."
88
+ )
89
+ ref_image_latents = ref_image_latents.repeat(batch_size // ref_image_latents.shape[0], 1, 1, 1)
90
+
91
+ ref_image_latents = torch.cat([ref_image_latents] * 2) if do_classifier_free_guidance else ref_image_latents
92
+
93
+ # aligning device to prevent device errors when concating it with the latent model input
94
+ ref_image_latents = ref_image_latents.to(device=device, dtype=dtype)
95
+ return ref_image_latents
96
+
97
+ @torch.no_grad()
98
+ def __call__(
99
+ self,
100
+ prompt: Union[str, List[str]] = None,
101
+ image: Union[
102
+ torch.FloatTensor,
103
+ PIL.Image.Image,
104
+ np.ndarray,
105
+ List[torch.FloatTensor],
106
+ List[PIL.Image.Image],
107
+ List[np.ndarray],
108
+ ] = None,
109
+ ref_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
110
+ height: Optional[int] = None,
111
+ width: Optional[int] = None,
112
+ num_inference_steps: int = 50,
113
+ guidance_scale: float = 7.5,
114
+ negative_prompt: Optional[Union[str, List[str]]] = None,
115
+ num_images_per_prompt: Optional[int] = 1,
116
+ eta: float = 0.0,
117
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
118
+ latents: Optional[torch.FloatTensor] = None,
119
+ prompt_embeds: Optional[torch.FloatTensor] = None,
120
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
121
+ output_type: Optional[str] = "pil",
122
+ return_dict: bool = True,
123
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
124
+ callback_steps: int = 1,
125
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
126
+ controlnet_conditioning_scale: Union[float, List[float]] = 1.0,
127
+ guess_mode: bool = False,
128
+ attention_auto_machine_weight: float = 1.0,
129
+ gn_auto_machine_weight: float = 1.0,
130
+ style_fidelity: float = 0.5,
131
+ reference_attn: bool = True,
132
+ reference_adain: bool = True,
133
+ ):
134
+ r"""
135
+ Function invoked when calling the pipeline for generation.
136
+
137
+ Args:
138
+ prompt (`str` or `List[str]`, *optional*):
139
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
140
+ instead.
141
+ image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`,:
142
+ `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`):
143
+ The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If
144
+ the type is specified as `Torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can
145
+ also be accepted as an image. The dimensions of the output image defaults to `image`'s dimensions. If
146
+ height and/or width are passed, `image` is resized according to them. If multiple ControlNets are
147
+ specified in init, images must be passed as a list such that each element of the list can be correctly
148
+ batched for input to a single controlnet.
149
+ ref_image (`torch.FloatTensor`, `PIL.Image.Image`):
150
+ The Reference Control input condition. Reference Control uses this input condition to generate guidance to Unet. If
151
+ the type is specified as `Torch.FloatTensor`, it is passed to Reference Control as is. `PIL.Image.Image` can
152
+ also be accepted as an image.
153
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
154
+ The height in pixels of the generated image.
155
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
156
+ The width in pixels of the generated image.
157
+ num_inference_steps (`int`, *optional*, defaults to 50):
158
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
159
+ expense of slower inference.
160
+ guidance_scale (`float`, *optional*, defaults to 7.5):
161
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
162
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
163
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
164
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
165
+ usually at the expense of lower image quality.
166
+ negative_prompt (`str` or `List[str]`, *optional*):
167
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
168
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
169
+ less than `1`).
170
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
171
+ The number of images to generate per prompt.
172
+ eta (`float`, *optional*, defaults to 0.0):
173
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
174
+ [`schedulers.DDIMScheduler`], will be ignored for others.
175
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
176
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
177
+ to make generation deterministic.
178
+ latents (`torch.FloatTensor`, *optional*):
179
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
180
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
181
+ tensor will ge generated by sampling using the supplied random `generator`.
182
+ prompt_embeds (`torch.FloatTensor`, *optional*):
183
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
184
+ provided, text embeddings will be generated from `prompt` input argument.
185
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
186
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
187
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
188
+ argument.
189
+ output_type (`str`, *optional*, defaults to `"pil"`):
190
+ The output format of the generate image. Choose between
191
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
192
+ return_dict (`bool`, *optional*, defaults to `True`):
193
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
194
+ plain tuple.
195
+ callback (`Callable`, *optional*):
196
+ A function that will be called every `callback_steps` steps during inference. The function will be
197
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
198
+ callback_steps (`int`, *optional*, defaults to 1):
199
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
200
+ called at every step.
201
+ cross_attention_kwargs (`dict`, *optional*):
202
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
203
+ `self.processor` in
204
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
205
+ controlnet_conditioning_scale (`float` or `List[float]`, *optional*, defaults to 1.0):
206
+ The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
207
+ to the residual in the original unet. If multiple ControlNets are specified in init, you can set the
208
+ corresponding scale as a list.
209
+ guess_mode (`bool`, *optional*, defaults to `False`):
210
+ In this mode, the ControlNet encoder will try best to recognize the content of the input image even if
211
+ you remove all prompts. The `guidance_scale` between 3.0 and 5.0 is recommended.
212
+ attention_auto_machine_weight (`float`):
213
+ Weight of using reference query for self attention's context.
214
+ If attention_auto_machine_weight=1.0, use reference query for all self attention's context.
215
+ gn_auto_machine_weight (`float`):
216
+ Weight of using reference adain. If gn_auto_machine_weight=2.0, use all reference adain plugins.
217
+ style_fidelity (`float`):
218
+ style fidelity of ref_uncond_xt. If style_fidelity=1.0, control more important,
219
+ elif style_fidelity=0.0, prompt more important, else balanced.
220
+ reference_attn (`bool`):
221
+ Whether to use reference query for self attention's context.
222
+ reference_adain (`bool`):
223
+ Whether to use reference adain.
224
+
225
+ Examples:
226
+
227
+ Returns:
228
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
229
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
230
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
231
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
232
+ (nsfw) content, according to the `safety_checker`.
233
+ """
234
+ assert reference_attn or reference_adain, "`reference_attn` or `reference_adain` must be True."
235
+
236
+ # 1. Check inputs. Raise error if not correct
237
+ self.check_inputs(
238
+ prompt,
239
+ image,
240
+ callback_steps,
241
+ negative_prompt,
242
+ prompt_embeds,
243
+ negative_prompt_embeds,
244
+ controlnet_conditioning_scale,
245
+ )
246
+
247
+ # 2. Define call parameters
248
+ if prompt is not None and isinstance(prompt, str):
249
+ batch_size = 1
250
+ elif prompt is not None and isinstance(prompt, list):
251
+ batch_size = len(prompt)
252
+ else:
253
+ batch_size = prompt_embeds.shape[0]
254
+
255
+ device = self._execution_device
256
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
257
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
258
+ # corresponds to doing no classifier free guidance.
259
+ do_classifier_free_guidance = guidance_scale > 1.0
260
+
261
+ controlnet = self.controlnet._orig_mod if is_compiled_module(self.controlnet) else self.controlnet
262
+
263
+ if isinstance(controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float):
264
+ controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(controlnet.nets)
265
+
266
+ global_pool_conditions = (
267
+ controlnet.config.global_pool_conditions
268
+ if isinstance(controlnet, ControlNetModel)
269
+ else controlnet.nets[0].config.global_pool_conditions
270
+ )
271
+ guess_mode = guess_mode or global_pool_conditions
272
+
273
+ # 3. Encode input prompt
274
+ text_encoder_lora_scale = (
275
+ cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
276
+ )
277
+ prompt_embeds = self._encode_prompt(
278
+ prompt,
279
+ device,
280
+ num_images_per_prompt,
281
+ do_classifier_free_guidance,
282
+ negative_prompt,
283
+ prompt_embeds=prompt_embeds,
284
+ negative_prompt_embeds=negative_prompt_embeds,
285
+ lora_scale=text_encoder_lora_scale,
286
+ )
287
+
288
+ # 4. Prepare image
289
+ if isinstance(controlnet, ControlNetModel):
290
+ image = self.prepare_image(
291
+ image=image,
292
+ width=width,
293
+ height=height,
294
+ batch_size=batch_size * num_images_per_prompt,
295
+ num_images_per_prompt=num_images_per_prompt,
296
+ device=device,
297
+ dtype=controlnet.dtype,
298
+ do_classifier_free_guidance=do_classifier_free_guidance,
299
+ guess_mode=guess_mode,
300
+ )
301
+ height, width = image.shape[-2:]
302
+ elif isinstance(controlnet, MultiControlNetModel):
303
+ images = []
304
+
305
+ for image_ in image:
306
+ image_ = self.prepare_image(
307
+ image=image_,
308
+ width=width,
309
+ height=height,
310
+ batch_size=batch_size * num_images_per_prompt,
311
+ num_images_per_prompt=num_images_per_prompt,
312
+ device=device,
313
+ dtype=controlnet.dtype,
314
+ do_classifier_free_guidance=do_classifier_free_guidance,
315
+ guess_mode=guess_mode,
316
+ )
317
+
318
+ images.append(image_)
319
+
320
+ image = images
321
+ height, width = image[0].shape[-2:]
322
+ else:
323
+ assert False
324
+
325
+ # 5. Preprocess reference image
326
+ ref_image = self.prepare_image(
327
+ image=ref_image,
328
+ width=width,
329
+ height=height,
330
+ batch_size=batch_size * num_images_per_prompt,
331
+ num_images_per_prompt=num_images_per_prompt,
332
+ device=device,
333
+ dtype=prompt_embeds.dtype,
334
+ )
335
+
336
+ # 6. Prepare timesteps
337
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
338
+ timesteps = self.scheduler.timesteps
339
+
340
+ # 7. Prepare latent variables
341
+ num_channels_latents = self.unet.config.in_channels
342
+ latents = self.prepare_latents(
343
+ batch_size * num_images_per_prompt,
344
+ num_channels_latents,
345
+ height,
346
+ width,
347
+ prompt_embeds.dtype,
348
+ device,
349
+ generator,
350
+ latents,
351
+ )
352
+
353
+ # 8. Prepare reference latent variables
354
+ ref_image_latents = self.prepare_ref_latents(
355
+ ref_image,
356
+ batch_size * num_images_per_prompt,
357
+ prompt_embeds.dtype,
358
+ device,
359
+ generator,
360
+ do_classifier_free_guidance,
361
+ )
362
+
363
+ # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
364
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
365
+
366
+ # 9. Modify self attention and group norm
367
+ MODE = "write"
368
+ uc_mask = (
369
+ torch.Tensor([1] * batch_size * num_images_per_prompt + [0] * batch_size * num_images_per_prompt)
370
+ .type_as(ref_image_latents)
371
+ .bool()
372
+ )
373
+
374
+ def hacked_basic_transformer_inner_forward(
375
+ self,
376
+ hidden_states: torch.FloatTensor,
377
+ attention_mask: Optional[torch.FloatTensor] = None,
378
+ encoder_hidden_states: Optional[torch.FloatTensor] = None,
379
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
380
+ timestep: Optional[torch.LongTensor] = None,
381
+ cross_attention_kwargs: Dict[str, Any] = None,
382
+ class_labels: Optional[torch.LongTensor] = None,
383
+ ):
384
+ if self.use_ada_layer_norm:
385
+ norm_hidden_states = self.norm1(hidden_states, timestep)
386
+ elif self.use_ada_layer_norm_zero:
387
+ norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1(
388
+ hidden_states, timestep, class_labels, hidden_dtype=hidden_states.dtype
389
+ )
390
+ else:
391
+ norm_hidden_states = self.norm1(hidden_states)
392
+
393
+ # 1. Self-Attention
394
+ cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {}
395
+ if self.only_cross_attention:
396
+ attn_output = self.attn1(
397
+ norm_hidden_states,
398
+ encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
399
+ attention_mask=attention_mask,
400
+ **cross_attention_kwargs,
401
+ )
402
+ else:
403
+ if MODE == "write":
404
+ self.bank.append(norm_hidden_states.detach().clone())
405
+ attn_output = self.attn1(
406
+ norm_hidden_states,
407
+ encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
408
+ attention_mask=attention_mask,
409
+ **cross_attention_kwargs,
410
+ )
411
+ if MODE == "read":
412
+ if attention_auto_machine_weight > self.attn_weight:
413
+ attn_output_uc = self.attn1(
414
+ norm_hidden_states,
415
+ encoder_hidden_states=torch.cat([norm_hidden_states] + self.bank, dim=1),
416
+ # attention_mask=attention_mask,
417
+ **cross_attention_kwargs,
418
+ )
419
+ attn_output_c = attn_output_uc.clone()
420
+ if do_classifier_free_guidance and style_fidelity > 0:
421
+ attn_output_c[uc_mask] = self.attn1(
422
+ norm_hidden_states[uc_mask],
423
+ encoder_hidden_states=norm_hidden_states[uc_mask],
424
+ **cross_attention_kwargs,
425
+ )
426
+ attn_output = style_fidelity * attn_output_c + (1.0 - style_fidelity) * attn_output_uc
427
+ self.bank.clear()
428
+ else:
429
+ attn_output = self.attn1(
430
+ norm_hidden_states,
431
+ encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
432
+ attention_mask=attention_mask,
433
+ **cross_attention_kwargs,
434
+ )
435
+ if self.use_ada_layer_norm_zero:
436
+ attn_output = gate_msa.unsqueeze(1) * attn_output
437
+ hidden_states = attn_output + hidden_states
438
+
439
+ if self.attn2 is not None:
440
+ norm_hidden_states = (
441
+ self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states)
442
+ )
443
+
444
+ # 2. Cross-Attention
445
+ attn_output = self.attn2(
446
+ norm_hidden_states,
447
+ encoder_hidden_states=encoder_hidden_states,
448
+ attention_mask=encoder_attention_mask,
449
+ **cross_attention_kwargs,
450
+ )
451
+ hidden_states = attn_output + hidden_states
452
+
453
+ # 3. Feed-forward
454
+ norm_hidden_states = self.norm3(hidden_states)
455
+
456
+ if self.use_ada_layer_norm_zero:
457
+ norm_hidden_states = norm_hidden_states * (1 + scale_mlp[:, None]) + shift_mlp[:, None]
458
+
459
+ ff_output = self.ff(norm_hidden_states)
460
+
461
+ if self.use_ada_layer_norm_zero:
462
+ ff_output = gate_mlp.unsqueeze(1) * ff_output
463
+
464
+ hidden_states = ff_output + hidden_states
465
+
466
+ return hidden_states
467
+
468
+ def hacked_mid_forward(self, *args, **kwargs):
469
+ eps = 1e-6
470
+ x = self.original_forward(*args, **kwargs)
471
+ if MODE == "write":
472
+ if gn_auto_machine_weight >= self.gn_weight:
473
+ var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0)
474
+ self.mean_bank.append(mean)
475
+ self.var_bank.append(var)
476
+ if MODE == "read":
477
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
478
+ var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0)
479
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
480
+ mean_acc = sum(self.mean_bank) / float(len(self.mean_bank))
481
+ var_acc = sum(self.var_bank) / float(len(self.var_bank))
482
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
483
+ x_uc = (((x - mean) / std) * std_acc) + mean_acc
484
+ x_c = x_uc.clone()
485
+ if do_classifier_free_guidance and style_fidelity > 0:
486
+ x_c[uc_mask] = x[uc_mask]
487
+ x = style_fidelity * x_c + (1.0 - style_fidelity) * x_uc
488
+ self.mean_bank = []
489
+ self.var_bank = []
490
+ return x
491
+
492
+ def hack_CrossAttnDownBlock2D_forward(
493
+ self,
494
+ hidden_states: torch.FloatTensor,
495
+ temb: Optional[torch.FloatTensor] = None,
496
+ encoder_hidden_states: Optional[torch.FloatTensor] = None,
497
+ attention_mask: Optional[torch.FloatTensor] = None,
498
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
499
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
500
+ ):
501
+ eps = 1e-6
502
+
503
+ # TODO(Patrick, William) - attention mask is not used
504
+ output_states = ()
505
+
506
+ for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)):
507
+ hidden_states = resnet(hidden_states, temb)
508
+ hidden_states = attn(
509
+ hidden_states,
510
+ encoder_hidden_states=encoder_hidden_states,
511
+ cross_attention_kwargs=cross_attention_kwargs,
512
+ attention_mask=attention_mask,
513
+ encoder_attention_mask=encoder_attention_mask,
514
+ return_dict=False,
515
+ )[0]
516
+ if MODE == "write":
517
+ if gn_auto_machine_weight >= self.gn_weight:
518
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
519
+ self.mean_bank.append([mean])
520
+ self.var_bank.append([var])
521
+ if MODE == "read":
522
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
523
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
524
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
525
+ mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
526
+ var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
527
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
528
+ hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
529
+ hidden_states_c = hidden_states_uc.clone()
530
+ if do_classifier_free_guidance and style_fidelity > 0:
531
+ hidden_states_c[uc_mask] = hidden_states[uc_mask]
532
+ hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
533
+
534
+ output_states = output_states + (hidden_states,)
535
+
536
+ if MODE == "read":
537
+ self.mean_bank = []
538
+ self.var_bank = []
539
+
540
+ if self.downsamplers is not None:
541
+ for downsampler in self.downsamplers:
542
+ hidden_states = downsampler(hidden_states)
543
+
544
+ output_states = output_states + (hidden_states,)
545
+
546
+ return hidden_states, output_states
547
+
548
+ def hacked_DownBlock2D_forward(self, hidden_states, temb=None):
549
+ eps = 1e-6
550
+
551
+ output_states = ()
552
+
553
+ for i, resnet in enumerate(self.resnets):
554
+ hidden_states = resnet(hidden_states, temb)
555
+
556
+ if MODE == "write":
557
+ if gn_auto_machine_weight >= self.gn_weight:
558
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
559
+ self.mean_bank.append([mean])
560
+ self.var_bank.append([var])
561
+ if MODE == "read":
562
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
563
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
564
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
565
+ mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
566
+ var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
567
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
568
+ hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
569
+ hidden_states_c = hidden_states_uc.clone()
570
+ if do_classifier_free_guidance and style_fidelity > 0:
571
+ hidden_states_c[uc_mask] = hidden_states[uc_mask]
572
+ hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
573
+
574
+ output_states = output_states + (hidden_states,)
575
+
576
+ if MODE == "read":
577
+ self.mean_bank = []
578
+ self.var_bank = []
579
+
580
+ if self.downsamplers is not None:
581
+ for downsampler in self.downsamplers:
582
+ hidden_states = downsampler(hidden_states)
583
+
584
+ output_states = output_states + (hidden_states,)
585
+
586
+ return hidden_states, output_states
587
+
588
+ def hacked_CrossAttnUpBlock2D_forward(
589
+ self,
590
+ hidden_states: torch.FloatTensor,
591
+ res_hidden_states_tuple: Tuple[torch.FloatTensor, ...],
592
+ temb: Optional[torch.FloatTensor] = None,
593
+ encoder_hidden_states: Optional[torch.FloatTensor] = None,
594
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
595
+ upsample_size: Optional[int] = None,
596
+ attention_mask: Optional[torch.FloatTensor] = None,
597
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
598
+ ):
599
+ eps = 1e-6
600
+ # TODO(Patrick, William) - attention mask is not used
601
+ for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)):
602
+ # pop res hidden states
603
+ res_hidden_states = res_hidden_states_tuple[-1]
604
+ res_hidden_states_tuple = res_hidden_states_tuple[:-1]
605
+ hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
606
+ hidden_states = resnet(hidden_states, temb)
607
+ hidden_states = attn(
608
+ hidden_states,
609
+ encoder_hidden_states=encoder_hidden_states,
610
+ cross_attention_kwargs=cross_attention_kwargs,
611
+ attention_mask=attention_mask,
612
+ encoder_attention_mask=encoder_attention_mask,
613
+ return_dict=False,
614
+ )[0]
615
+
616
+ if MODE == "write":
617
+ if gn_auto_machine_weight >= self.gn_weight:
618
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
619
+ self.mean_bank.append([mean])
620
+ self.var_bank.append([var])
621
+ if MODE == "read":
622
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
623
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
624
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
625
+ mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
626
+ var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
627
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
628
+ hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
629
+ hidden_states_c = hidden_states_uc.clone()
630
+ if do_classifier_free_guidance and style_fidelity > 0:
631
+ hidden_states_c[uc_mask] = hidden_states[uc_mask]
632
+ hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
633
+
634
+ if MODE == "read":
635
+ self.mean_bank = []
636
+ self.var_bank = []
637
+
638
+ if self.upsamplers is not None:
639
+ for upsampler in self.upsamplers:
640
+ hidden_states = upsampler(hidden_states, upsample_size)
641
+
642
+ return hidden_states
643
+
644
+ def hacked_UpBlock2D_forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
645
+ eps = 1e-6
646
+ for i, resnet in enumerate(self.resnets):
647
+ # pop res hidden states
648
+ res_hidden_states = res_hidden_states_tuple[-1]
649
+ res_hidden_states_tuple = res_hidden_states_tuple[:-1]
650
+ hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
651
+ hidden_states = resnet(hidden_states, temb)
652
+
653
+ if MODE == "write":
654
+ if gn_auto_machine_weight >= self.gn_weight:
655
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
656
+ self.mean_bank.append([mean])
657
+ self.var_bank.append([var])
658
+ if MODE == "read":
659
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
660
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
661
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
662
+ mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
663
+ var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
664
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
665
+ hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
666
+ hidden_states_c = hidden_states_uc.clone()
667
+ if do_classifier_free_guidance and style_fidelity > 0:
668
+ hidden_states_c[uc_mask] = hidden_states[uc_mask]
669
+ hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
670
+
671
+ if MODE == "read":
672
+ self.mean_bank = []
673
+ self.var_bank = []
674
+
675
+ if self.upsamplers is not None:
676
+ for upsampler in self.upsamplers:
677
+ hidden_states = upsampler(hidden_states, upsample_size)
678
+
679
+ return hidden_states
680
+
681
+ if reference_attn:
682
+ attn_modules = [module for module in torch_dfs(self.unet) if isinstance(module, BasicTransformerBlock)]
683
+ attn_modules = sorted(attn_modules, key=lambda x: -x.norm1.normalized_shape[0])
684
+
685
+ for i, module in enumerate(attn_modules):
686
+ module._original_inner_forward = module.forward
687
+ module.forward = hacked_basic_transformer_inner_forward.__get__(module, BasicTransformerBlock)
688
+ module.bank = []
689
+ module.attn_weight = float(i) / float(len(attn_modules))
690
+
691
+ if reference_adain:
692
+ gn_modules = [self.unet.mid_block]
693
+ self.unet.mid_block.gn_weight = 0
694
+
695
+ down_blocks = self.unet.down_blocks
696
+ for w, module in enumerate(down_blocks):
697
+ module.gn_weight = 1.0 - float(w) / float(len(down_blocks))
698
+ gn_modules.append(module)
699
+
700
+ up_blocks = self.unet.up_blocks
701
+ for w, module in enumerate(up_blocks):
702
+ module.gn_weight = float(w) / float(len(up_blocks))
703
+ gn_modules.append(module)
704
+
705
+ for i, module in enumerate(gn_modules):
706
+ if getattr(module, "original_forward", None) is None:
707
+ module.original_forward = module.forward
708
+ if i == 0:
709
+ # mid_block
710
+ module.forward = hacked_mid_forward.__get__(module, torch.nn.Module)
711
+ elif isinstance(module, CrossAttnDownBlock2D):
712
+ module.forward = hack_CrossAttnDownBlock2D_forward.__get__(module, CrossAttnDownBlock2D)
713
+ elif isinstance(module, DownBlock2D):
714
+ module.forward = hacked_DownBlock2D_forward.__get__(module, DownBlock2D)
715
+ elif isinstance(module, CrossAttnUpBlock2D):
716
+ module.forward = hacked_CrossAttnUpBlock2D_forward.__get__(module, CrossAttnUpBlock2D)
717
+ elif isinstance(module, UpBlock2D):
718
+ module.forward = hacked_UpBlock2D_forward.__get__(module, UpBlock2D)
719
+ module.mean_bank = []
720
+ module.var_bank = []
721
+ module.gn_weight *= 2
722
+
723
+ # 11. Denoising loop
724
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
725
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
726
+ for i, t in enumerate(timesteps):
727
+ # expand the latents if we are doing classifier free guidance
728
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
729
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
730
+
731
+ # controlnet(s) inference
732
+ if guess_mode and do_classifier_free_guidance:
733
+ # Infer ControlNet only for the conditional batch.
734
+ control_model_input = latents
735
+ control_model_input = self.scheduler.scale_model_input(control_model_input, t)
736
+ controlnet_prompt_embeds = prompt_embeds.chunk(2)[1]
737
+ else:
738
+ control_model_input = latent_model_input
739
+ controlnet_prompt_embeds = prompt_embeds
740
+
741
+ down_block_res_samples, mid_block_res_sample = self.controlnet(
742
+ control_model_input,
743
+ t,
744
+ encoder_hidden_states=controlnet_prompt_embeds,
745
+ controlnet_cond=image,
746
+ conditioning_scale=controlnet_conditioning_scale,
747
+ guess_mode=guess_mode,
748
+ return_dict=False,
749
+ )
750
+
751
+ if guess_mode and do_classifier_free_guidance:
752
+ # ControlNet was inferred only for the conditional batch.
753
+ # To apply the output of ControlNet to both the unconditional and conditional batches,
754
+ # add 0 to the unconditional batch to keep it unchanged.
755
+ down_block_res_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_block_res_samples]
756
+ mid_block_res_sample = torch.cat([torch.zeros_like(mid_block_res_sample), mid_block_res_sample])
757
+
758
+ # ref only part
759
+ noise = randn_tensor(
760
+ ref_image_latents.shape, generator=generator, device=device, dtype=ref_image_latents.dtype
761
+ )
762
+ ref_xt = self.scheduler.add_noise(
763
+ ref_image_latents,
764
+ noise,
765
+ t.reshape(
766
+ 1,
767
+ ),
768
+ )
769
+ ref_xt = self.scheduler.scale_model_input(ref_xt, t)
770
+
771
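+ # Reference pass: run the UNet on the noised reference latents in "write" mode so the hacked forwards cache attention features and GroupNorm statistics; the denoising pass below then runs in "read" mode and reuses them.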
+ MODE = "write"
772
+ self.unet(
773
+ ref_xt,
774
+ t,
775
+ encoder_hidden_states=prompt_embeds,
776
+ cross_attention_kwargs=cross_attention_kwargs,
777
+ return_dict=False,
778
+ )
779
+
780
+ # predict the noise residual
781
+ MODE = "read"
782
+ noise_pred = self.unet(
783
+ latent_model_input,
784
+ t,
785
+ encoder_hidden_states=prompt_embeds,
786
+ cross_attention_kwargs=cross_attention_kwargs,
787
+ down_block_additional_residuals=down_block_res_samples,
788
+ mid_block_additional_residual=mid_block_res_sample,
789
+ return_dict=False,
790
+ )[0]
791
+
792
+ # perform guidance
793
+ if do_classifier_free_guidance:
794
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
795
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
796
+
797
+ # compute the previous noisy sample x_t -> x_t-1
798
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
799
+
800
+ # call the callback, if provided
801
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
802
+ progress_bar.update()
803
+ if callback is not None and i % callback_steps == 0:
804
+ callback(i, t, latents)
805
+
806
+ # If we do sequential model offloading, let's offload unet and controlnet
807
+ # manually for max memory savings
808
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
809
+ self.unet.to("cpu")
810
+ self.controlnet.to("cpu")
811
+ torch.cuda.empty_cache()
812
+
813
+ if not output_type == "latent":
814
+ image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
815
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
816
+ else:
817
+ image = latents
818
+ has_nsfw_concept = None
819
+
820
+ if has_nsfw_concept is None:
821
+ do_denormalize = [True] * image.shape[0]
822
+ else:
823
+ do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
824
+
825
+ image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
826
+
827
+ # Offload last model to CPU
828
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
829
+ self.final_offload_hook.offload()
830
+
831
+ if not return_dict:
832
+ return (image, has_nsfw_concept)
833
+
834
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
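The `hacked_*` forwards above implement reference-only control: each `BasicTransformerBlock` and up/down/mid block is monkey-patched so that a first UNet pass over the noised reference latents ("write" mode) caches self-attention features and GroupNorm statistics, and the actual denoising pass ("read" mode) reuses them, blended by `style_fidelity`. The sketch below shows how this community pipeline might be used; the checkpoint names, conditioning images, and `__call__` arguments (`ref_image`, `reference_attn`, `reference_adain`, `style_fidelity`) are assumptions inferred from the code above rather than a verified recipe.

```py
# Hedged usage sketch: loads the custom pipeline from this community folder by file name and
# feeds it a ControlNet conditioning image plus a style/layout reference image.
import torch
from diffusers import ControlNetModel, DiffusionPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

canny_image = load_image("canny_edges.png")   # hypothetical pre-computed Canny conditioning image
ref_image = load_image("reference.png")       # hypothetical reference image to borrow style from

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    custom_pipeline="stable_diffusion_controlnet_reference",  # assumed to resolve to the file above
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="a bird, best quality",
    image=canny_image,        # ControlNet conditioning
    ref_image=ref_image,      # reference whose attention / AdaIN statistics are reused
    reference_attn=True,
    reference_adain=True,
    style_fidelity=0.5,
    num_inference_steps=20,
).images[0]
image.save("output.png")
```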
v0.19.2/stable_diffusion_ipex.py ADDED
@@ -0,0 +1,848 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import inspect
16
+ from typing import Any, Callable, Dict, List, Optional, Union
17
+
18
+ import intel_extension_for_pytorch as ipex
19
+ import torch
20
+ from packaging import version
21
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
22
+
23
+ from diffusers.configuration_utils import FrozenDict
24
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
25
+ from diffusers.pipeline_utils import DiffusionPipeline
26
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
27
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
28
+ from diffusers.schedulers import KarrasDiffusionSchedulers
29
+ from diffusers.utils import (
30
+ deprecate,
31
+ is_accelerate_available,
32
+ is_accelerate_version,
33
+ logging,
34
+ randn_tensor,
35
+ replace_example_docstring,
36
+ )
37
+
38
+
39
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
40
+
41
+ EXAMPLE_DOC_STRING = """
42
+ Examples:
43
+ ```py
44
+ >>> import torch
45
+ >>> from diffusers import DiffusionPipeline
46
+
47
+ >>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_ipex")
48
+ >>> prompt = "a photo of an astronaut riding a horse on mars"
49
+ >>> num_inference_steps = 20
50
+ >>> # For Float32
51
+ >>> pipe.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512)  # image height/width must be consistent with the pipeline inference call below
52
+ >>> # For BFloat16
53
+ >>> pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512)  # image height/width must be consistent with the pipeline inference call below
54
+
55
+ >>> # For Float32
56
+ >>> image = pipe(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()'
57
+ >>> # For BFloat16
58
+ >>> with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
59
+ >>> image = pipe(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()'
60
+ ```
61
+ """
62
+
63
+
64
+ class StableDiffusionIPEXPipeline(DiffusionPipeline):
65
+ r"""
66
+ Pipeline for text-to-image generation using Stable Diffusion on IPEX.
67
+
68
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
69
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
70
+
71
+ Args:
72
+ vae ([`AutoencoderKL`]):
73
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
74
+ text_encoder ([`CLIPTextModel`]):
75
+ Frozen text-encoder. Stable Diffusion uses the text portion of
76
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
77
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
78
+ tokenizer (`CLIPTokenizer`):
79
+ Tokenizer of class
80
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
81
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
82
+ scheduler ([`SchedulerMixin`]):
83
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
84
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
85
+ safety_checker ([`StableDiffusionSafetyChecker`]):
86
+ Classification module that estimates whether generated images could be considered offensive or harmful.
87
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
88
+ feature_extractor ([`CLIPFeatureExtractor`]):
89
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
90
+ """
91
+ _optional_components = ["safety_checker", "feature_extractor"]
92
+
93
+ def __init__(
94
+ self,
95
+ vae: AutoencoderKL,
96
+ text_encoder: CLIPTextModel,
97
+ tokenizer: CLIPTokenizer,
98
+ unet: UNet2DConditionModel,
99
+ scheduler: KarrasDiffusionSchedulers,
100
+ safety_checker: StableDiffusionSafetyChecker,
101
+ feature_extractor: CLIPFeatureExtractor,
102
+ requires_safety_checker: bool = True,
103
+ ):
104
+ super().__init__()
105
+
106
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
107
+ deprecation_message = (
108
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
109
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
110
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
111
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
112
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
113
+ " file"
114
+ )
115
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
116
+ new_config = dict(scheduler.config)
117
+ new_config["steps_offset"] = 1
118
+ scheduler._internal_dict = FrozenDict(new_config)
119
+
120
+ if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
121
+ deprecation_message = (
122
+ f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
123
+ " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
124
+ " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
125
+ " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
126
+ " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
127
+ )
128
+ deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
129
+ new_config = dict(scheduler.config)
130
+ new_config["clip_sample"] = False
131
+ scheduler._internal_dict = FrozenDict(new_config)
132
+
133
+ if safety_checker is None and requires_safety_checker:
134
+ logger.warning(
135
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
136
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
137
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
138
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
139
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
140
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
141
+ )
142
+
143
+ if safety_checker is not None and feature_extractor is None:
144
+ raise ValueError(
145
+ "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
146
+ " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
147
+ )
148
+
149
+ is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
150
+ version.parse(unet.config._diffusers_version).base_version
151
+ ) < version.parse("0.9.0.dev0")
152
+ is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
153
+ if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
154
+ deprecation_message = (
155
+ "The configuration file of the unet has set the default `sample_size` to smaller than"
156
+ " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
157
+ " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
158
+ " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
159
+ " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
160
+ " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
161
+ " in the config might lead to incorrect results in future versions. If you have downloaded this"
162
+ " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
163
+ " the `unet/config.json` file"
164
+ )
165
+ deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
166
+ new_config = dict(unet.config)
167
+ new_config["sample_size"] = 64
168
+ unet._internal_dict = FrozenDict(new_config)
169
+
170
+ self.register_modules(
171
+ vae=vae,
172
+ text_encoder=text_encoder,
173
+ tokenizer=tokenizer,
174
+ unet=unet,
175
+ scheduler=scheduler,
176
+ safety_checker=safety_checker,
177
+ feature_extractor=feature_extractor,
178
+ )
179
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
180
+ self.register_to_config(requires_safety_checker=requires_safety_checker)
181
+
182
+ def get_input_example(self, prompt, height=None, width=None, guidance_scale=7.5, num_images_per_prompt=1):
183
+ prompt_embeds = None
184
+ negative_prompt_embeds = None
185
+ negative_prompt = None
186
+ callback_steps = 1
187
+ generator = None
188
+ latents = None
189
+
190
+ # 0. Default height and width to unet
191
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
192
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
193
+
194
+ # 1. Check inputs. Raise error if not correct
195
+ self.check_inputs(
196
+ prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
197
+ )
198
+
199
+ # 2. Define call parameters
200
+ if prompt is not None and isinstance(prompt, str):
201
+ batch_size = 1
202
+ elif prompt is not None and isinstance(prompt, list):
203
+ batch_size = len(prompt)
204
+
205
+ device = "cpu"
206
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
207
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
208
+ # corresponds to doing no classifier free guidance.
209
+ do_classifier_free_guidance = guidance_scale > 1.0
210
+
211
+ # 3. Encode input prompt
212
+ prompt_embeds = self._encode_prompt(
213
+ prompt,
214
+ device,
215
+ num_images_per_prompt,
216
+ do_classifier_free_guidance,
217
+ negative_prompt,
218
+ prompt_embeds=prompt_embeds,
219
+ negative_prompt_embeds=negative_prompt_embeds,
220
+ )
221
+
222
+ # 5. Prepare latent variables
223
+ latents = self.prepare_latents(
224
+ batch_size * num_images_per_prompt,
225
+ self.unet.in_channels,
226
+ height,
227
+ width,
228
+ prompt_embeds.dtype,
229
+ device,
230
+ generator,
231
+ latents,
232
+ )
233
+ dummy = torch.ones(1, dtype=torch.int32)
234
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
235
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, dummy)
236
+
237
+ unet_input_example = (latent_model_input, dummy, prompt_embeds)
238
+ vae_decoder_input_example = latents
239
+
240
+ return unet_input_example, vae_decoder_input_example
241
+
242
+ def prepare_for_ipex(self, prompt, dtype=torch.float32, height=None, width=None, guidance_scale=7.5):
243
+ self.unet = self.unet.to(memory_format=torch.channels_last)
244
+ self.vae.decoder = self.vae.decoder.to(memory_format=torch.channels_last)
245
+ self.text_encoder = self.text_encoder.to(memory_format=torch.channels_last)
246
+ if self.safety_checker is not None:
247
+ self.safety_checker = self.safety_checker.to(memory_format=torch.channels_last)
248
+
249
+ unet_input_example, vae_decoder_input_example = self.get_input_example(prompt, height, width, guidance_scale)
250
+
251
+ # optimize with ipex
252
+ if dtype == torch.bfloat16:
253
+ self.unet = ipex.optimize(
254
+ self.unet.eval(), dtype=torch.bfloat16, inplace=True, sample_input=unet_input_example
255
+ )
256
+ self.vae.decoder = ipex.optimize(self.vae.decoder.eval(), dtype=torch.bfloat16, inplace=True)
257
+ self.text_encoder = ipex.optimize(self.text_encoder.eval(), dtype=torch.bfloat16, inplace=True)
258
+ if self.safety_checker is not None:
259
+ self.safety_checker = ipex.optimize(self.safety_checker.eval(), dtype=torch.bfloat16, inplace=True)
260
+ elif dtype == torch.float32:
261
+ self.unet = ipex.optimize(
262
+ self.unet.eval(),
263
+ dtype=torch.float32,
264
+ inplace=True,
265
+ sample_input=unet_input_example,
266
+ level="O1",
267
+ weights_prepack=True,
268
+ auto_kernel_selection=False,
269
+ )
270
+ self.vae.decoder = ipex.optimize(
271
+ self.vae.decoder.eval(),
272
+ dtype=torch.float32,
273
+ inplace=True,
274
+ level="O1",
275
+ weights_prepack=True,
276
+ auto_kernel_selection=False,
277
+ )
278
+ self.text_encoder = ipex.optimize(
279
+ self.text_encoder.eval(),
280
+ dtype=torch.float32,
281
+ inplace=True,
282
+ level="O1",
283
+ weights_prepack=True,
284
+ auto_kernel_selection=False,
285
+ )
286
+ if self.safety_checker is not None:
287
+ self.safety_checker = ipex.optimize(
288
+ self.safety_checker.eval(),
289
+ dtype=torch.float32,
290
+ inplace=True,
291
+ level="O1",
292
+ weights_prepack=True,
293
+ auto_kernel_selection=False,
294
+ )
295
+ else:
296
+ raise ValueError(" The value of 'dtype' should be 'torch.bfloat16' or 'torch.float32' !")
297
+
298
+ # trace unet model to get better performance on IPEX
299
+ with torch.cpu.amp.autocast(enabled=dtype == torch.bfloat16), torch.no_grad():
300
+ unet_trace_model = torch.jit.trace(self.unet, unet_input_example, check_trace=False, strict=False)
301
+ unet_trace_model = torch.jit.freeze(unet_trace_model)
302
+ self.unet.forward = unet_trace_model.forward
303
+
304
+ # trace vae.decoder model to get better performance on IPEX
305
+ with torch.cpu.amp.autocast(enabled=dtype == torch.bfloat16), torch.no_grad():
306
+ vae_decoder_trace_model = torch.jit.trace(
307
+ self.vae.decoder, vae_decoder_input_example, check_trace=False, strict=False
308
+ )
309
+ vae_decoder_trace_model = torch.jit.freeze(vae_decoder_trace_model)
310
+ self.vae.decoder.forward = vae_decoder_trace_model.forward
311
+
312
+ def enable_vae_slicing(self):
313
+ r"""
314
+ Enable sliced VAE decoding.
315
+
316
+ When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
317
+ steps. This is useful to save some memory and allow larger batch sizes.
318
+ """
319
+ self.vae.enable_slicing()
320
+
321
+ def disable_vae_slicing(self):
322
+ r"""
323
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
324
+ computing decoding in one step.
325
+ """
326
+ self.vae.disable_slicing()
327
+
328
+ def enable_vae_tiling(self):
329
+ r"""
330
+ Enable tiled VAE decoding.
331
+
332
+ When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in
333
+ several steps. This is useful to save a large amount of memory and to allow the processing of larger images.
334
+ """
335
+ self.vae.enable_tiling()
336
+
337
+ def disable_vae_tiling(self):
338
+ r"""
339
+ Disable tiled VAE decoding. If `enable_vae_tiling` was previously invoked, this method will go back to
340
+ computing decoding in one step.
341
+ """
342
+ self.vae.disable_tiling()
343
+
344
+ def enable_sequential_cpu_offload(self, gpu_id=0):
345
+ r"""
346
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
347
+ text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
348
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
349
+ Note that offloading happens on a submodule basis. Memory savings are higher than with
350
+ `enable_model_cpu_offload`, but performance is lower.
351
+ """
352
+ if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"):
353
+ from accelerate import cpu_offload
354
+ else:
355
+ raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher")
356
+
357
+ device = torch.device(f"cuda:{gpu_id}")
358
+
359
+ if self.device.type != "cpu":
360
+ self.to("cpu", silence_dtype_warnings=True)
361
+ torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
362
+
363
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]:
364
+ cpu_offload(cpu_offloaded_model, device)
365
+
366
+ if self.safety_checker is not None:
367
+ cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True)
368
+
369
+ def enable_model_cpu_offload(self, gpu_id=0):
370
+ r"""
371
+ Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
372
+ to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
373
+ method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
374
+ `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
375
+ """
376
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
377
+ from accelerate import cpu_offload_with_hook
378
+ else:
379
+ raise ImportError("`enable_model_offload` requires `accelerate v0.17.0` or higher.")
380
+
381
+ device = torch.device(f"cuda:{gpu_id}")
382
+
383
+ if self.device.type != "cpu":
384
+ self.to("cpu", silence_dtype_warnings=True)
385
+ torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
386
+
387
+ hook = None
388
+ for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
389
+ _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
390
+
391
+ if self.safety_checker is not None:
392
+ _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
393
+
394
+ # We'll offload the last model manually.
395
+ self.final_offload_hook = hook
396
+
397
+ @property
398
+ def _execution_device(self):
399
+ r"""
400
+ Returns the device on which the pipeline's models will be executed. After calling
401
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
402
+ hooks.
403
+ """
404
+ if not hasattr(self.unet, "_hf_hook"):
405
+ return self.device
406
+ for module in self.unet.modules():
407
+ if (
408
+ hasattr(module, "_hf_hook")
409
+ and hasattr(module._hf_hook, "execution_device")
410
+ and module._hf_hook.execution_device is not None
411
+ ):
412
+ return torch.device(module._hf_hook.execution_device)
413
+ return self.device
414
+
415
+ def _encode_prompt(
416
+ self,
417
+ prompt,
418
+ device,
419
+ num_images_per_prompt,
420
+ do_classifier_free_guidance,
421
+ negative_prompt=None,
422
+ prompt_embeds: Optional[torch.FloatTensor] = None,
423
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
424
+ ):
425
+ r"""
426
+ Encodes the prompt into text encoder hidden states.
427
+
428
+ Args:
429
+ prompt (`str` or `List[str]`, *optional*):
430
+ prompt to be encoded
431
+ device: (`torch.device`):
432
+ torch device
433
+ num_images_per_prompt (`int`):
434
+ number of images that should be generated per prompt
435
+ do_classifier_free_guidance (`bool`):
436
+ whether to use classifier free guidance or not
437
+ negative_prompt (`str` or `List[str]`, *optional*):
438
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
439
+ `negative_prompt_embeds` instead.
440
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
441
+ prompt_embeds (`torch.FloatTensor`, *optional*):
442
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
443
+ provided, text embeddings will be generated from `prompt` input argument.
444
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
445
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
446
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
447
+ argument.
448
+ """
449
+ if prompt is not None and isinstance(prompt, str):
450
+ batch_size = 1
451
+ elif prompt is not None and isinstance(prompt, list):
452
+ batch_size = len(prompt)
453
+ else:
454
+ batch_size = prompt_embeds.shape[0]
455
+
456
+ if prompt_embeds is None:
457
+ text_inputs = self.tokenizer(
458
+ prompt,
459
+ padding="max_length",
460
+ max_length=self.tokenizer.model_max_length,
461
+ truncation=True,
462
+ return_tensors="pt",
463
+ )
464
+ text_input_ids = text_inputs.input_ids
465
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
466
+
467
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
468
+ text_input_ids, untruncated_ids
469
+ ):
470
+ removed_text = self.tokenizer.batch_decode(
471
+ untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
472
+ )
473
+ logger.warning(
474
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
475
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
476
+ )
477
+
478
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
479
+ attention_mask = text_inputs.attention_mask.to(device)
480
+ else:
481
+ attention_mask = None
482
+
483
+ prompt_embeds = self.text_encoder(
484
+ text_input_ids.to(device),
485
+ attention_mask=attention_mask,
486
+ )
487
+ prompt_embeds = prompt_embeds[0]
488
+
489
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
490
+
491
+ bs_embed, seq_len, _ = prompt_embeds.shape
492
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
493
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
494
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
495
+
496
+ # get unconditional embeddings for classifier free guidance
497
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
498
+ uncond_tokens: List[str]
499
+ if negative_prompt is None:
500
+ uncond_tokens = [""] * batch_size
501
+ elif type(prompt) is not type(negative_prompt):
502
+ raise TypeError(
503
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
504
+ f" {type(prompt)}."
505
+ )
506
+ elif isinstance(negative_prompt, str):
507
+ uncond_tokens = [negative_prompt]
508
+ elif batch_size != len(negative_prompt):
509
+ raise ValueError(
510
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
511
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
512
+ " the batch size of `prompt`."
513
+ )
514
+ else:
515
+ uncond_tokens = negative_prompt
516
+
517
+ max_length = prompt_embeds.shape[1]
518
+ uncond_input = self.tokenizer(
519
+ uncond_tokens,
520
+ padding="max_length",
521
+ max_length=max_length,
522
+ truncation=True,
523
+ return_tensors="pt",
524
+ )
525
+
526
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
527
+ attention_mask = uncond_input.attention_mask.to(device)
528
+ else:
529
+ attention_mask = None
530
+
531
+ negative_prompt_embeds = self.text_encoder(
532
+ uncond_input.input_ids.to(device),
533
+ attention_mask=attention_mask,
534
+ )
535
+ negative_prompt_embeds = negative_prompt_embeds[0]
536
+
537
+ if do_classifier_free_guidance:
538
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
539
+ seq_len = negative_prompt_embeds.shape[1]
540
+
541
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
542
+
543
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
544
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
545
+
546
+ # For classifier free guidance, we need to do two forward passes.
547
+ # Here we concatenate the unconditional and text embeddings into a single batch
548
+ # to avoid doing two forward passes
549
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
550
+
551
+ return prompt_embeds
552
+
553
+ def run_safety_checker(self, image, device, dtype):
554
+ if self.safety_checker is not None:
555
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
556
+ image, has_nsfw_concept = self.safety_checker(
557
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
558
+ )
559
+ else:
560
+ has_nsfw_concept = None
561
+ return image, has_nsfw_concept
562
+
563
+ def decode_latents(self, latents):
564
+ latents = 1 / self.vae.config.scaling_factor * latents
565
+ image = self.vae.decode(latents).sample
566
+ image = (image / 2 + 0.5).clamp(0, 1)
567
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
568
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
569
+ return image
570
+
571
+ def prepare_extra_step_kwargs(self, generator, eta):
572
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
573
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
574
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
575
+ # and should be between [0, 1]
576
+
577
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
578
+ extra_step_kwargs = {}
579
+ if accepts_eta:
580
+ extra_step_kwargs["eta"] = eta
581
+
582
+ # check if the scheduler accepts generator
583
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
584
+ if accepts_generator:
585
+ extra_step_kwargs["generator"] = generator
586
+ return extra_step_kwargs
587
+
588
+ def check_inputs(
589
+ self,
590
+ prompt,
591
+ height,
592
+ width,
593
+ callback_steps,
594
+ negative_prompt=None,
595
+ prompt_embeds=None,
596
+ negative_prompt_embeds=None,
597
+ ):
598
+ if height % 8 != 0 or width % 8 != 0:
599
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
600
+
601
+ if (callback_steps is None) or (
602
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
603
+ ):
604
+ raise ValueError(
605
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
606
+ f" {type(callback_steps)}."
607
+ )
608
+
609
+ if prompt is not None and prompt_embeds is not None:
610
+ raise ValueError(
611
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
612
+ " only forward one of the two."
613
+ )
614
+ elif prompt is None and prompt_embeds is None:
615
+ raise ValueError(
616
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
617
+ )
618
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
619
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
620
+
621
+ if negative_prompt is not None and negative_prompt_embeds is not None:
622
+ raise ValueError(
623
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
624
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
625
+ )
626
+
627
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
628
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
629
+ raise ValueError(
630
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
631
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
632
+ f" {negative_prompt_embeds.shape}."
633
+ )
634
+
635
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
636
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
637
+ if isinstance(generator, list) and len(generator) != batch_size:
638
+ raise ValueError(
639
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
640
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
641
+ )
642
+
643
+ if latents is None:
644
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
645
+ else:
646
+ latents = latents.to(device)
647
+
648
+ # scale the initial noise by the standard deviation required by the scheduler
649
+ latents = latents * self.scheduler.init_noise_sigma
650
+ return latents
651
+
652
+ @torch.no_grad()
653
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
654
+ def __call__(
655
+ self,
656
+ prompt: Union[str, List[str]] = None,
657
+ height: Optional[int] = None,
658
+ width: Optional[int] = None,
659
+ num_inference_steps: int = 50,
660
+ guidance_scale: float = 7.5,
661
+ negative_prompt: Optional[Union[str, List[str]]] = None,
662
+ num_images_per_prompt: Optional[int] = 1,
663
+ eta: float = 0.0,
664
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
665
+ latents: Optional[torch.FloatTensor] = None,
666
+ prompt_embeds: Optional[torch.FloatTensor] = None,
667
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
668
+ output_type: Optional[str] = "pil",
669
+ return_dict: bool = True,
670
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
671
+ callback_steps: int = 1,
672
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
673
+ ):
674
+ r"""
675
+ Function invoked when calling the pipeline for generation.
676
+
677
+ Args:
678
+ prompt (`str` or `List[str]`, *optional*):
679
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
680
+ instead.
681
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
682
+ The height in pixels of the generated image.
683
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
684
+ The width in pixels of the generated image.
685
+ num_inference_steps (`int`, *optional*, defaults to 50):
686
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
687
+ expense of slower inference.
688
+ guidance_scale (`float`, *optional*, defaults to 7.5):
689
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
690
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
691
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
692
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
693
+ usually at the expense of lower image quality.
694
+ negative_prompt (`str` or `List[str]`, *optional*):
695
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
696
+ `negative_prompt_embeds` instead.
697
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
698
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
699
+ The number of images to generate per prompt.
700
+ eta (`float`, *optional*, defaults to 0.0):
701
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
702
+ [`schedulers.DDIMScheduler`], will be ignored for others.
703
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
704
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
705
+ to make generation deterministic.
706
+ latents (`torch.FloatTensor`, *optional*):
707
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
708
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
709
+ tensor will be generated by sampling using the supplied random `generator`.
710
+ prompt_embeds (`torch.FloatTensor`, *optional*):
711
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
712
+ provided, text embeddings will be generated from `prompt` input argument.
713
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
714
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
715
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
716
+ argument.
717
+ output_type (`str`, *optional*, defaults to `"pil"`):
718
+ The output format of the generated image. Choose between
719
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
720
+ return_dict (`bool`, *optional*, defaults to `True`):
721
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
722
+ plain tuple.
723
+ callback (`Callable`, *optional*):
724
+ A function that will be called every `callback_steps` steps during inference. The function will be
725
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
726
+ callback_steps (`int`, *optional*, defaults to 1):
727
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
728
+ called at every step.
729
+ cross_attention_kwargs (`dict`, *optional*):
730
+ A kwargs dictionary that if specified is passed along to the `AttnProcessor` as defined under
731
+ `self.processor` in
732
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
733
+
734
+ Examples:
735
+
736
+ Returns:
737
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
738
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
739
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
740
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
741
+ (nsfw) content, according to the `safety_checker`.
742
+ """
743
+ # 0. Default height and width to unet
744
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
745
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
746
+
747
+ # 1. Check inputs. Raise error if not correct
748
+ self.check_inputs(
749
+ prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
750
+ )
751
+
752
+ # 2. Define call parameters
753
+ if prompt is not None and isinstance(prompt, str):
754
+ batch_size = 1
755
+ elif prompt is not None and isinstance(prompt, list):
756
+ batch_size = len(prompt)
757
+ else:
758
+ batch_size = prompt_embeds.shape[0]
759
+
760
+ device = self._execution_device
761
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
762
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
763
+ # corresponds to doing no classifier free guidance.
764
+ do_classifier_free_guidance = guidance_scale > 1.0
765
+
766
+ # 3. Encode input prompt
767
+ prompt_embeds = self._encode_prompt(
768
+ prompt,
769
+ device,
770
+ num_images_per_prompt,
771
+ do_classifier_free_guidance,
772
+ negative_prompt,
773
+ prompt_embeds=prompt_embeds,
774
+ negative_prompt_embeds=negative_prompt_embeds,
775
+ )
776
+
777
+ # 4. Prepare timesteps
778
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
779
+ timesteps = self.scheduler.timesteps
780
+
781
+ # 5. Prepare latent variables
782
+ num_channels_latents = self.unet.in_channels
783
+ latents = self.prepare_latents(
784
+ batch_size * num_images_per_prompt,
785
+ num_channels_latents,
786
+ height,
787
+ width,
788
+ prompt_embeds.dtype,
789
+ device,
790
+ generator,
791
+ latents,
792
+ )
793
+
794
+ # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
795
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
796
+
797
+ # 7. Denoising loop
798
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
799
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
800
+ for i, t in enumerate(timesteps):
801
+ # expand the latents if we are doing classifier free guidance
802
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
803
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
804
+
805
+ # predict the noise residual
806
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=prompt_embeds)["sample"]
807
+
808
+ # perform guidance
809
+ if do_classifier_free_guidance:
810
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
811
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
812
+
813
+ # compute the previous noisy sample x_t -> x_t-1
814
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
815
+
816
+ # call the callback, if provided
817
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
818
+ progress_bar.update()
819
+ if callback is not None and i % callback_steps == 0:
820
+ callback(i, t, latents)
821
+
822
+ if output_type == "latent":
823
+ image = latents
824
+ has_nsfw_concept = None
825
+ elif output_type == "pil":
826
+ # 8. Post-processing
827
+ image = self.decode_latents(latents)
828
+
829
+ # 9. Run safety checker
830
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
831
+
832
+ # 10. Convert to PIL
833
+ image = self.numpy_to_pil(image)
834
+ else:
835
+ # 8. Post-processing
836
+ image = self.decode_latents(latents)
837
+
838
+ # 9. Run safety checker
839
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
840
+
841
+ # Offload last model to CPU
842
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
843
+ self.final_offload_hook.offload()
844
+
845
+ if not return_dict:
846
+ return (image, has_nsfw_concept)
847
+
848
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
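The `prepare_for_ipex` method above follows a general Intel Extension for PyTorch recipe: move the eval-mode modules to `channels_last`, run `ipex.optimize` (optionally in bfloat16), then trace and freeze the UNet and VAE decoder with TorchScript and swap the frozen `forward` back onto the original module. Below is a standalone sketch of that pattern on a toy module; it is illustrative only and assumes `intel_extension_for_pytorch` is installed.

```py
# Hedged sketch of the IPEX optimize + trace + freeze pattern used by `prepare_for_ipex`.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU()).eval()
example = torch.randn(1, 3, 64, 64).to(memory_format=torch.channels_last)

model = model.to(memory_format=torch.channels_last)               # channels_last usually helps conv-heavy CPU models
model = ipex.optimize(model, dtype=torch.bfloat16, inplace=True)  # weight prepacking + bf16 where supported

with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16), torch.no_grad():
    traced = torch.jit.trace(model, example, check_trace=False, strict=False)
    traced = torch.jit.freeze(traced)   # fold weights/constants into the graph
    model.forward = traced.forward      # swap in the frozen forward, as the pipeline does

with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16), torch.no_grad():
    out = model(example)
```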
v0.19.2/stable_diffusion_mega.py ADDED
@@ -0,0 +1,227 @@
1
+ from typing import Any, Callable, Dict, List, Optional, Union
2
+
3
+ import PIL.Image
4
+ import torch
5
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
6
+
7
+ from diffusers import (
8
+ AutoencoderKL,
9
+ DDIMScheduler,
10
+ DiffusionPipeline,
11
+ LMSDiscreteScheduler,
12
+ PNDMScheduler,
13
+ StableDiffusionImg2ImgPipeline,
14
+ StableDiffusionInpaintPipelineLegacy,
15
+ StableDiffusionPipeline,
16
+ UNet2DConditionModel,
17
+ )
18
+ from diffusers.configuration_utils import FrozenDict
19
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
20
+ from diffusers.utils import deprecate, logging
21
+
22
+
23
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
24
+
25
+
26
+ class StableDiffusionMegaPipeline(DiffusionPipeline):
27
+ r"""
28
+ Pipeline for text-to-image generation using Stable Diffusion.
29
+
30
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
31
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
32
+
33
+ Args:
34
+ vae ([`AutoencoderKL`]):
35
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
36
+ text_encoder ([`CLIPTextModel`]):
37
+ Frozen text-encoder. Stable Diffusion uses the text portion of
38
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
39
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
40
+ tokenizer (`CLIPTokenizer`):
41
+ Tokenizer of class
42
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
43
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
44
+ scheduler ([`SchedulerMixin`]):
45
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
46
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
47
+ safety_checker ([`StableDiffusionSafetyChecker`]):
48
+ Classification module that estimates whether generated images could be considered offensive or harmful.
49
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
50
+ feature_extractor ([`CLIPImageProcessor`]):
51
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
52
+ """
53
+ _optional_components = ["safety_checker", "feature_extractor"]
54
+
55
+ def __init__(
56
+ self,
57
+ vae: AutoencoderKL,
58
+ text_encoder: CLIPTextModel,
59
+ tokenizer: CLIPTokenizer,
60
+ unet: UNet2DConditionModel,
61
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
62
+ safety_checker: StableDiffusionSafetyChecker,
63
+ feature_extractor: CLIPImageProcessor,
64
+ requires_safety_checker: bool = True,
65
+ ):
66
+ super().__init__()
67
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
68
+ deprecation_message = (
69
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
70
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
71
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
72
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
73
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
74
+ " file"
75
+ )
76
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
77
+ new_config = dict(scheduler.config)
78
+ new_config["steps_offset"] = 1
79
+ scheduler._internal_dict = FrozenDict(new_config)
80
+
81
+ self.register_modules(
82
+ vae=vae,
83
+ text_encoder=text_encoder,
84
+ tokenizer=tokenizer,
85
+ unet=unet,
86
+ scheduler=scheduler,
87
+ safety_checker=safety_checker,
88
+ feature_extractor=feature_extractor,
89
+ )
90
+ self.register_to_config(requires_safety_checker=requires_safety_checker)
91
+
92
+ @property
93
+ def components(self) -> Dict[str, Any]:
94
+ return {k: getattr(self, k) for k in self.config.keys() if not k.startswith("_")}
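+ # Note: the sub-pipelines below (text2img, img2img, inpaint) are instantiated with
+ # StableDiffusion*Pipeline(**self.components), so they reuse these registered modules
+ # directly instead of reloading any weights.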
95
+
96
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
97
+ r"""
98
+ Enable sliced attention computation.
99
+
100
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
101
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
102
+
103
+ Args:
104
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
105
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
106
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
107
+ `attention_head_dim` must be a multiple of `slice_size`.
108
+ """
109
+ if slice_size == "auto":
110
+ # half the attention head size is usually a good trade-off between
111
+ # speed and memory
112
+ slice_size = self.unet.config.attention_head_dim // 2
113
+ self.unet.set_attention_slice(slice_size)
114
+
115
+ def disable_attention_slicing(self):
116
+ r"""
117
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
118
+ back to computing attention in one step.
119
+ """
120
+ # set slice_size = `None` to disable `attention slicing`
121
+ self.enable_attention_slicing(None)
122
+
123
+ @torch.no_grad()
124
+ def inpaint(
125
+ self,
126
+ prompt: Union[str, List[str]],
127
+ image: Union[torch.FloatTensor, PIL.Image.Image],
128
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image],
129
+ strength: float = 0.8,
130
+ num_inference_steps: Optional[int] = 50,
131
+ guidance_scale: Optional[float] = 7.5,
132
+ negative_prompt: Optional[Union[str, List[str]]] = None,
133
+ num_images_per_prompt: Optional[int] = 1,
134
+ eta: Optional[float] = 0.0,
135
+ generator: Optional[torch.Generator] = None,
136
+ output_type: Optional[str] = "pil",
137
+ return_dict: bool = True,
138
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
139
+ callback_steps: int = 1,
140
+ ):
141
+ # This method delegates to StableDiffusionInpaintPipelineLegacy; for more information, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion
142
+ return StableDiffusionInpaintPipelineLegacy(**self.components)(
143
+ prompt=prompt,
144
+ image=image,
145
+ mask_image=mask_image,
146
+ strength=strength,
147
+ num_inference_steps=num_inference_steps,
148
+ guidance_scale=guidance_scale,
149
+ negative_prompt=negative_prompt,
150
+ num_images_per_prompt=num_images_per_prompt,
151
+ eta=eta,
152
+ generator=generator,
153
+ output_type=output_type,
154
+ return_dict=return_dict,
155
+ callback=callback,
156
+ )
157
+
158
+ @torch.no_grad()
159
+ def img2img(
160
+ self,
161
+ prompt: Union[str, List[str]],
162
+ image: Union[torch.FloatTensor, PIL.Image.Image],
163
+ strength: float = 0.8,
164
+ num_inference_steps: Optional[int] = 50,
165
+ guidance_scale: Optional[float] = 7.5,
166
+ negative_prompt: Optional[Union[str, List[str]]] = None,
167
+ num_images_per_prompt: Optional[int] = 1,
168
+ eta: Optional[float] = 0.0,
169
+ generator: Optional[torch.Generator] = None,
170
+ output_type: Optional[str] = "pil",
171
+ return_dict: bool = True,
172
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
173
+ callback_steps: int = 1,
174
+ **kwargs,
175
+ ):
176
+ # For more information on how this function works, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion#diffusers.StableDiffusionImg2ImgPipeline
177
+ return StableDiffusionImg2ImgPipeline(**self.components)(
178
+ prompt=prompt,
179
+ image=image,
180
+ strength=strength,
181
+ num_inference_steps=num_inference_steps,
182
+ guidance_scale=guidance_scale,
183
+ negative_prompt=negative_prompt,
184
+ num_images_per_prompt=num_images_per_prompt,
185
+ eta=eta,
186
+ generator=generator,
187
+ output_type=output_type,
188
+ return_dict=return_dict,
189
+ callback=callback,
190
+ callback_steps=callback_steps,
191
+ )
192
+
193
+ @torch.no_grad()
194
+ def text2img(
195
+ self,
196
+ prompt: Union[str, List[str]],
197
+ height: int = 512,
198
+ width: int = 512,
199
+ num_inference_steps: int = 50,
200
+ guidance_scale: float = 7.5,
201
+ negative_prompt: Optional[Union[str, List[str]]] = None,
202
+ num_images_per_prompt: Optional[int] = 1,
203
+ eta: float = 0.0,
204
+ generator: Optional[torch.Generator] = None,
205
+ latents: Optional[torch.FloatTensor] = None,
206
+ output_type: Optional[str] = "pil",
207
+ return_dict: bool = True,
208
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
209
+ callback_steps: int = 1,
210
+ ):
211
+ # For more information on how this function works, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion#diffusers.StableDiffusionPipeline
212
+ return StableDiffusionPipeline(**self.components)(
213
+ prompt=prompt,
214
+ height=height,
215
+ width=width,
216
+ num_inference_steps=num_inference_steps,
217
+ guidance_scale=guidance_scale,
218
+ negative_prompt=negative_prompt,
219
+ num_images_per_prompt=num_images_per_prompt,
220
+ eta=eta,
221
+ generator=generator,
222
+ latents=latents,
223
+ output_type=output_type,
224
+ return_dict=return_dict,
225
+ callback=callback,
226
+ callback_steps=callback_steps,
227
+ )
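A minimal usage sketch for the mega pipeline above, assuming it is loaded through the `custom_pipeline` mechanism described in this release's README; the checkpoint name and prompts are placeholders:

```py
import torch
from diffusers import DiffusionPipeline

# Load the community "mega" pipeline on top of a standard Stable Diffusion checkpoint
# (checkpoint choice is an assumption and can be swapped for any SD 1.x checkpoint).
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_mega",
    torch_dtype=torch.float16,
)
pipe.to("cuda")
pipe.enable_attention_slicing()

# Text-to-image
image = pipe.text2img("An astronaut riding a horse").images[0]

# Image-to-image, reusing the generated image as the starting point
image = pipe.img2img(
    prompt="An astronaut riding a horse, oil painting", image=image, strength=0.75
).images[0]
```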
v0.19.2/stable_diffusion_reference.py ADDED
@@ -0,0 +1,796 @@
1
+ # Inspired by: https://github.com/Mikubill/sd-webui-controlnet/discussions/1236 and https://github.com/Mikubill/sd-webui-controlnet/discussions/1280
2
+ from typing import Any, Callable, Dict, List, Optional, Tuple, Union
3
+
4
+ import numpy as np
5
+ import PIL.Image
6
+ import torch
7
+
8
+ from diffusers import StableDiffusionPipeline
9
+ from diffusers.models.attention import BasicTransformerBlock
10
+ from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D, CrossAttnUpBlock2D, DownBlock2D, UpBlock2D
11
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
12
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import rescale_noise_cfg
13
+ from diffusers.utils import PIL_INTERPOLATION, logging, randn_tensor
14
+
15
+
16
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
17
+
18
+ EXAMPLE_DOC_STRING = """
19
+ Examples:
20
+ ```py
21
+ >>> import torch
22
+ >>> from diffusers import UniPCMultistepScheduler
23
+ >>> from diffusers.utils import load_image
24
+
25
+ >>> input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
26
+
27
+ >>> pipe = StableDiffusionReferencePipeline.from_pretrained(
28
+ "runwayml/stable-diffusion-v1-5",
29
+ safety_checker=None,
30
+ torch_dtype=torch.float16
31
+ ).to('cuda:0')
32
+
33
+ >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
34
+
35
+ >>> result_img = pipe(ref_image=input_image,
36
+ prompt="1girl",
37
+ num_inference_steps=20,
38
+ reference_attn=True,
39
+ reference_adain=True).images[0]
40
+
41
+ >>> result_img.show()
42
+ ```
43
+ """
44
+
45
+
46
+ def torch_dfs(model: torch.nn.Module):
47
+ result = [model]
48
+ for child in model.children():
49
+ result += torch_dfs(child)
50
+ return result
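+ # torch_dfs(unet) returns the module itself followed by every nested submodule, which is what
+ # lets the isinstance filters further below pick out the transformer and UNet blocks to patch.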
51
+
52
+
53
+ class StableDiffusionReferencePipeline(StableDiffusionPipeline):
54
+ def _default_height_width(self, height, width, image):
55
+ # NOTE: It is possible that a list of images have different
56
+ # dimensions for each image, so just checking the first image
57
+ # is not _exactly_ correct, but it is simple.
58
+ while isinstance(image, list):
59
+ image = image[0]
60
+
61
+ if height is None:
62
+ if isinstance(image, PIL.Image.Image):
63
+ height = image.height
64
+ elif isinstance(image, torch.Tensor):
65
+ height = image.shape[2]
66
+
67
+ height = (height // 8) * 8 # round down to nearest multiple of 8
68
+
69
+ if width is None:
70
+ if isinstance(image, PIL.Image.Image):
71
+ width = image.width
72
+ elif isinstance(image, torch.Tensor):
73
+ width = image.shape[3]
74
+
75
+ width = (width // 8) * 8 # round down to nearest multiple of 8
76
+
77
+ return height, width
78
+
79
+ def prepare_image(
80
+ self,
81
+ image,
82
+ width,
83
+ height,
84
+ batch_size,
85
+ num_images_per_prompt,
86
+ device,
87
+ dtype,
88
+ do_classifier_free_guidance=False,
89
+ guess_mode=False,
90
+ ):
91
+ if not isinstance(image, torch.Tensor):
92
+ if isinstance(image, PIL.Image.Image):
93
+ image = [image]
94
+
95
+ if isinstance(image[0], PIL.Image.Image):
96
+ images = []
97
+
98
+ for image_ in image:
99
+ image_ = image_.convert("RGB")
100
+ image_ = image_.resize((width, height), resample=PIL_INTERPOLATION["lanczos"])
101
+ image_ = np.array(image_)
102
+ image_ = image_[None, :]
103
+ images.append(image_)
104
+
105
+ image = images
106
+
107
+ image = np.concatenate(image, axis=0)
108
+ image = np.array(image).astype(np.float32) / 255.0
109
+ image = (image - 0.5) / 0.5
110
+ image = image.transpose(0, 3, 1, 2)
111
+ image = torch.from_numpy(image)
112
+ elif isinstance(image[0], torch.Tensor):
113
+ image = torch.cat(image, dim=0)
114
+
115
+ image_batch_size = image.shape[0]
116
+
117
+ if image_batch_size == 1:
118
+ repeat_by = batch_size
119
+ else:
120
+ # image batch size is the same as prompt batch size
121
+ repeat_by = num_images_per_prompt
122
+
123
+ image = image.repeat_interleave(repeat_by, dim=0)
124
+
125
+ image = image.to(device=device, dtype=dtype)
126
+
127
+ if do_classifier_free_guidance and not guess_mode:
128
+ image = torch.cat([image] * 2)
129
+
130
+ return image
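+ # At this point the reference image is an NCHW float tensor in [-1, 1], repeated to the full
+ # batch size and duplicated once more when classifier-free guidance is active (and guess mode is off).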
131
+
132
+ def prepare_ref_latents(self, refimage, batch_size, dtype, device, generator, do_classifier_free_guidance):
133
+ refimage = refimage.to(device=device, dtype=dtype)
134
+
135
+ # encode the mask image into latents space so we can concatenate it to the latents
136
+ if isinstance(generator, list):
137
+ ref_image_latents = [
138
+ self.vae.encode(refimage[i : i + 1]).latent_dist.sample(generator=generator[i])
139
+ for i in range(batch_size)
140
+ ]
141
+ ref_image_latents = torch.cat(ref_image_latents, dim=0)
142
+ else:
143
+ ref_image_latents = self.vae.encode(refimage).latent_dist.sample(generator=generator)
144
+ ref_image_latents = self.vae.config.scaling_factor * ref_image_latents
145
+
146
+ # duplicate mask and ref_image_latents for each generation per prompt, using mps friendly method
147
+ if ref_image_latents.shape[0] < batch_size:
148
+ if not batch_size % ref_image_latents.shape[0] == 0:
149
+ raise ValueError(
150
+ "The passed images and the required batch size don't match. Images are supposed to be duplicated"
151
+ f" to a total batch size of {batch_size}, but {ref_image_latents.shape[0]} images were passed."
152
+ " Make sure the number of images that you pass is divisible by the total requested batch size."
153
+ )
154
+ ref_image_latents = ref_image_latents.repeat(batch_size // ref_image_latents.shape[0], 1, 1, 1)
155
+
156
+ ref_image_latents = torch.cat([ref_image_latents] * 2) if do_classifier_free_guidance else ref_image_latents
157
+
158
+ # aligning device to prevent device errors when concating it with the latent model input
159
+ ref_image_latents = ref_image_latents.to(device=device, dtype=dtype)
160
+ return ref_image_latents
161
+
162
+ @torch.no_grad()
163
+ def __call__(
164
+ self,
165
+ prompt: Union[str, List[str]] = None,
166
+ ref_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
167
+ height: Optional[int] = None,
168
+ width: Optional[int] = None,
169
+ num_inference_steps: int = 50,
170
+ guidance_scale: float = 7.5,
171
+ negative_prompt: Optional[Union[str, List[str]]] = None,
172
+ num_images_per_prompt: Optional[int] = 1,
173
+ eta: float = 0.0,
174
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
175
+ latents: Optional[torch.FloatTensor] = None,
176
+ prompt_embeds: Optional[torch.FloatTensor] = None,
177
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
178
+ output_type: Optional[str] = "pil",
179
+ return_dict: bool = True,
180
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
181
+ callback_steps: int = 1,
182
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
183
+ guidance_rescale: float = 0.0,
184
+ attention_auto_machine_weight: float = 1.0,
185
+ gn_auto_machine_weight: float = 1.0,
186
+ style_fidelity: float = 0.5,
187
+ reference_attn: bool = True,
188
+ reference_adain: bool = True,
189
+ ):
190
+ r"""
191
+ Function invoked when calling the pipeline for generation.
192
+
193
+ Args:
194
+ prompt (`str` or `List[str]`, *optional*):
195
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
196
+ instead.
197
+ ref_image (`torch.FloatTensor`, `PIL.Image.Image`):
198
+ The Reference Control input condition. Reference Control uses this input condition to generate guidance to Unet. If
199
+ the type is specified as `torch.FloatTensor`, it is passed to Reference Control as is. `PIL.Image.Image` can
200
+ also be accepted as an image.
201
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
202
+ The height in pixels of the generated image.
203
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
204
+ The width in pixels of the generated image.
205
+ num_inference_steps (`int`, *optional*, defaults to 50):
206
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
207
+ expense of slower inference.
208
+ guidance_scale (`float`, *optional*, defaults to 7.5):
209
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
210
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
211
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
212
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
213
+ usually at the expense of lower image quality.
214
+ negative_prompt (`str` or `List[str]`, *optional*):
215
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
216
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
217
+ less than `1`).
218
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
219
+ The number of images to generate per prompt.
220
+ eta (`float`, *optional*, defaults to 0.0):
221
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
222
+ [`schedulers.DDIMScheduler`], will be ignored for others.
223
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
224
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
225
+ to make generation deterministic.
226
+ latents (`torch.FloatTensor`, *optional*):
227
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
228
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
229
+ tensor will ge generated by sampling using the supplied random `generator`.
230
+ prompt_embeds (`torch.FloatTensor`, *optional*):
231
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
232
+ provided, text embeddings will be generated from `prompt` input argument.
233
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
234
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
235
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
236
+ argument.
237
+ output_type (`str`, *optional*, defaults to `"pil"`):
238
+ The output format of the generate image. Choose between
239
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
240
+ return_dict (`bool`, *optional*, defaults to `True`):
241
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
242
+ plain tuple.
243
+ callback (`Callable`, *optional*):
244
+ A function that will be called every `callback_steps` steps during inference. The function will be
245
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
246
+ callback_steps (`int`, *optional*, defaults to 1):
247
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
248
+ called at every step.
249
+ cross_attention_kwargs (`dict`, *optional*):
250
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
251
+ `self.processor` in
252
+ [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
253
+ guidance_rescale (`float`, *optional*, defaults to 0.0):
254
+ Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
255
+ Flawed](https://arxiv.org/pdf/2305.08891.pdf) `guidance_rescale` is defined as `φ` in equation 16. of
256
+ [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
257
+ Guidance rescale factor should fix overexposure when using zero terminal SNR.
258
+ attention_auto_machine_weight (`float`):
259
+ Weight of using reference query for self attention's context.
260
+ If attention_auto_machine_weight=1.0, use reference query for all self attention's context.
261
+ gn_auto_machine_weight (`float`):
262
+ Weight of using reference adain. If gn_auto_machine_weight=2.0, use all reference adain plugins.
263
+ style_fidelity (`float`):
264
+ style fidelity of ref_uncond_xt. If style_fidelity=1.0, control more important,
265
+ elif style_fidelity=0.0, prompt more important, else balanced.
266
+ reference_attn (`bool`):
267
+ Whether to use reference query for self attention's context.
268
+ reference_adain (`bool`):
269
+ Whether to use reference adain.
270
+
271
+ Examples:
272
+
273
+ Returns:
274
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
275
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
276
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
277
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
278
+ (nsfw) content, according to the `safety_checker`.
279
+ """
280
+ assert reference_attn or reference_adain, "`reference_attn` or `reference_adain` must be True."
281
+
282
+ # 0. Default height and width to unet
283
+ height, width = self._default_height_width(height, width, ref_image)
284
+
285
+ # 1. Check inputs. Raise error if not correct
286
+ self.check_inputs(
287
+ prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
288
+ )
289
+
290
+ # 2. Define call parameters
291
+ if prompt is not None and isinstance(prompt, str):
292
+ batch_size = 1
293
+ elif prompt is not None and isinstance(prompt, list):
294
+ batch_size = len(prompt)
295
+ else:
296
+ batch_size = prompt_embeds.shape[0]
297
+
298
+ device = self._execution_device
299
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
300
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
301
+ # corresponds to doing no classifier free guidance.
302
+ do_classifier_free_guidance = guidance_scale > 1.0
303
+
304
+ # 3. Encode input prompt
305
+ text_encoder_lora_scale = (
306
+ cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
307
+ )
308
+ prompt_embeds = self._encode_prompt(
309
+ prompt,
310
+ device,
311
+ num_images_per_prompt,
312
+ do_classifier_free_guidance,
313
+ negative_prompt,
314
+ prompt_embeds=prompt_embeds,
315
+ negative_prompt_embeds=negative_prompt_embeds,
316
+ lora_scale=text_encoder_lora_scale,
317
+ )
318
+
319
+ # 4. Preprocess reference image
320
+ ref_image = self.prepare_image(
321
+ image=ref_image,
322
+ width=width,
323
+ height=height,
324
+ batch_size=batch_size * num_images_per_prompt,
325
+ num_images_per_prompt=num_images_per_prompt,
326
+ device=device,
327
+ dtype=prompt_embeds.dtype,
328
+ )
329
+
330
+ # 5. Prepare timesteps
331
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
332
+ timesteps = self.scheduler.timesteps
333
+
334
+ # 6. Prepare latent variables
335
+ num_channels_latents = self.unet.config.in_channels
336
+ latents = self.prepare_latents(
337
+ batch_size * num_images_per_prompt,
338
+ num_channels_latents,
339
+ height,
340
+ width,
341
+ prompt_embeds.dtype,
342
+ device,
343
+ generator,
344
+ latents,
345
+ )
346
+
347
+ # 7. Prepare reference latent variables
348
+ ref_image_latents = self.prepare_ref_latents(
349
+ ref_image,
350
+ batch_size * num_images_per_prompt,
351
+ prompt_embeds.dtype,
352
+ device,
353
+ generator,
354
+ do_classifier_free_guidance,
355
+ )
356
+
357
+ # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
358
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
359
+
360
+ # 9. Modify self attention and group norm
361
+ MODE = "write"
362
+ uc_mask = (
363
+ torch.Tensor([1] * batch_size * num_images_per_prompt + [0] * batch_size * num_images_per_prompt)
364
+ .type_as(ref_image_latents)
365
+ .bool()
366
+ )
367
+
368
+ def hacked_basic_transformer_inner_forward(
369
+ self,
370
+ hidden_states: torch.FloatTensor,
371
+ attention_mask: Optional[torch.FloatTensor] = None,
372
+ encoder_hidden_states: Optional[torch.FloatTensor] = None,
373
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
374
+ timestep: Optional[torch.LongTensor] = None,
375
+ cross_attention_kwargs: Dict[str, Any] = None,
376
+ class_labels: Optional[torch.LongTensor] = None,
377
+ ):
378
+ if self.use_ada_layer_norm:
379
+ norm_hidden_states = self.norm1(hidden_states, timestep)
380
+ elif self.use_ada_layer_norm_zero:
381
+ norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1(
382
+ hidden_states, timestep, class_labels, hidden_dtype=hidden_states.dtype
383
+ )
384
+ else:
385
+ norm_hidden_states = self.norm1(hidden_states)
386
+
387
+ # 1. Self-Attention
388
+ cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {}
389
+ if self.only_cross_attention:
390
+ attn_output = self.attn1(
391
+ norm_hidden_states,
392
+ encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
393
+ attention_mask=attention_mask,
394
+ **cross_attention_kwargs,
395
+ )
396
+ else:
397
+ if MODE == "write":
398
+ self.bank.append(norm_hidden_states.detach().clone())
399
+ attn_output = self.attn1(
400
+ norm_hidden_states,
401
+ encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
402
+ attention_mask=attention_mask,
403
+ **cross_attention_kwargs,
404
+ )
405
+ if MODE == "read":
406
+ if attention_auto_machine_weight > self.attn_weight:
407
+ attn_output_uc = self.attn1(
408
+ norm_hidden_states,
409
+ encoder_hidden_states=torch.cat([norm_hidden_states] + self.bank, dim=1),
410
+ # attention_mask=attention_mask,
411
+ **cross_attention_kwargs,
412
+ )
413
+ attn_output_c = attn_output_uc.clone()
414
+ if do_classifier_free_guidance and style_fidelity > 0:
415
+ attn_output_c[uc_mask] = self.attn1(
416
+ norm_hidden_states[uc_mask],
417
+ encoder_hidden_states=norm_hidden_states[uc_mask],
418
+ **cross_attention_kwargs,
419
+ )
420
+ attn_output = style_fidelity * attn_output_c + (1.0 - style_fidelity) * attn_output_uc
421
+ self.bank.clear()
422
+ else:
423
+ attn_output = self.attn1(
424
+ norm_hidden_states,
425
+ encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
426
+ attention_mask=attention_mask,
427
+ **cross_attention_kwargs,
428
+ )
429
+ if self.use_ada_layer_norm_zero:
430
+ attn_output = gate_msa.unsqueeze(1) * attn_output
431
+ hidden_states = attn_output + hidden_states
432
+
433
+ if self.attn2 is not None:
434
+ norm_hidden_states = (
435
+ self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states)
436
+ )
437
+
438
+ # 2. Cross-Attention
439
+ attn_output = self.attn2(
440
+ norm_hidden_states,
441
+ encoder_hidden_states=encoder_hidden_states,
442
+ attention_mask=encoder_attention_mask,
443
+ **cross_attention_kwargs,
444
+ )
445
+ hidden_states = attn_output + hidden_states
446
+
447
+ # 3. Feed-forward
448
+ norm_hidden_states = self.norm3(hidden_states)
449
+
450
+ if self.use_ada_layer_norm_zero:
451
+ norm_hidden_states = norm_hidden_states * (1 + scale_mlp[:, None]) + shift_mlp[:, None]
452
+
453
+ ff_output = self.ff(norm_hidden_states)
454
+
455
+ if self.use_ada_layer_norm_zero:
456
+ ff_output = gate_mlp.unsqueeze(1) * ff_output
457
+
458
+ hidden_states = ff_output + hidden_states
459
+
460
+ return hidden_states
461
+
462
+ def hacked_mid_forward(self, *args, **kwargs):
463
+ eps = 1e-6
464
+ x = self.original_forward(*args, **kwargs)
465
+ if MODE == "write":
466
+ if gn_auto_machine_weight >= self.gn_weight:
467
+ var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0)
468
+ self.mean_bank.append(mean)
469
+ self.var_bank.append(var)
470
+ if MODE == "read":
471
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
472
+ var, mean = torch.var_mean(x, dim=(2, 3), keepdim=True, correction=0)
473
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
474
+ mean_acc = sum(self.mean_bank) / float(len(self.mean_bank))
475
+ var_acc = sum(self.var_bank) / float(len(self.var_bank))
476
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
477
+ x_uc = (((x - mean) / std) * std_acc) + mean_acc
478
+ x_c = x_uc.clone()
479
+ if do_classifier_free_guidance and style_fidelity > 0:
480
+ x_c[uc_mask] = x[uc_mask]
481
+ x = style_fidelity * x_c + (1.0 - style_fidelity) * x_uc
482
+ self.mean_bank = []
483
+ self.var_bank = []
484
+ return x
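+ # In "write" mode the hacked blocks record per-channel mean/variance of the reference pass;
+ # in "read" mode they re-normalize the current features to those statistics (an AdaIN-style
+ # transfer), and style_fidelity blends in a variant where the unconditional half is left unchanged.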
485
+
486
+ def hack_CrossAttnDownBlock2D_forward(
487
+ self,
488
+ hidden_states: torch.FloatTensor,
489
+ temb: Optional[torch.FloatTensor] = None,
490
+ encoder_hidden_states: Optional[torch.FloatTensor] = None,
491
+ attention_mask: Optional[torch.FloatTensor] = None,
492
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
493
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
494
+ ):
495
+ eps = 1e-6
496
+
497
+ # TODO(Patrick, William) - attention mask is not used
498
+ output_states = ()
499
+
500
+ for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)):
501
+ hidden_states = resnet(hidden_states, temb)
502
+ hidden_states = attn(
503
+ hidden_states,
504
+ encoder_hidden_states=encoder_hidden_states,
505
+ cross_attention_kwargs=cross_attention_kwargs,
506
+ attention_mask=attention_mask,
507
+ encoder_attention_mask=encoder_attention_mask,
508
+ return_dict=False,
509
+ )[0]
510
+ if MODE == "write":
511
+ if gn_auto_machine_weight >= self.gn_weight:
512
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
513
+ self.mean_bank.append([mean])
514
+ self.var_bank.append([var])
515
+ if MODE == "read":
516
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
517
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
518
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
519
+ mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
520
+ var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
521
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
522
+ hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
523
+ hidden_states_c = hidden_states_uc.clone()
524
+ if do_classifier_free_guidance and style_fidelity > 0:
525
+ hidden_states_c[uc_mask] = hidden_states[uc_mask]
526
+ hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
527
+
528
+ output_states = output_states + (hidden_states,)
529
+
530
+ if MODE == "read":
531
+ self.mean_bank = []
532
+ self.var_bank = []
533
+
534
+ if self.downsamplers is not None:
535
+ for downsampler in self.downsamplers:
536
+ hidden_states = downsampler(hidden_states)
537
+
538
+ output_states = output_states + (hidden_states,)
539
+
540
+ return hidden_states, output_states
541
+
542
+ def hacked_DownBlock2D_forward(self, hidden_states, temb=None):
543
+ eps = 1e-6
544
+
545
+ output_states = ()
546
+
547
+ for i, resnet in enumerate(self.resnets):
548
+ hidden_states = resnet(hidden_states, temb)
549
+
550
+ if MODE == "write":
551
+ if gn_auto_machine_weight >= self.gn_weight:
552
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
553
+ self.mean_bank.append([mean])
554
+ self.var_bank.append([var])
555
+ if MODE == "read":
556
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
557
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
558
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
559
+ mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
560
+ var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
561
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
562
+ hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
563
+ hidden_states_c = hidden_states_uc.clone()
564
+ if do_classifier_free_guidance and style_fidelity > 0:
565
+ hidden_states_c[uc_mask] = hidden_states[uc_mask]
566
+ hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
567
+
568
+ output_states = output_states + (hidden_states,)
569
+
570
+ if MODE == "read":
571
+ self.mean_bank = []
572
+ self.var_bank = []
573
+
574
+ if self.downsamplers is not None:
575
+ for downsampler in self.downsamplers:
576
+ hidden_states = downsampler(hidden_states)
577
+
578
+ output_states = output_states + (hidden_states,)
579
+
580
+ return hidden_states, output_states
581
+
582
+ def hacked_CrossAttnUpBlock2D_forward(
583
+ self,
584
+ hidden_states: torch.FloatTensor,
585
+ res_hidden_states_tuple: Tuple[torch.FloatTensor, ...],
586
+ temb: Optional[torch.FloatTensor] = None,
587
+ encoder_hidden_states: Optional[torch.FloatTensor] = None,
588
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
589
+ upsample_size: Optional[int] = None,
590
+ attention_mask: Optional[torch.FloatTensor] = None,
591
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
592
+ ):
593
+ eps = 1e-6
594
+ # TODO(Patrick, William) - attention mask is not used
595
+ for i, (resnet, attn) in enumerate(zip(self.resnets, self.attentions)):
596
+ # pop res hidden states
597
+ res_hidden_states = res_hidden_states_tuple[-1]
598
+ res_hidden_states_tuple = res_hidden_states_tuple[:-1]
599
+ hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
600
+ hidden_states = resnet(hidden_states, temb)
601
+ hidden_states = attn(
602
+ hidden_states,
603
+ encoder_hidden_states=encoder_hidden_states,
604
+ cross_attention_kwargs=cross_attention_kwargs,
605
+ attention_mask=attention_mask,
606
+ encoder_attention_mask=encoder_attention_mask,
607
+ return_dict=False,
608
+ )[0]
609
+
610
+ if MODE == "write":
611
+ if gn_auto_machine_weight >= self.gn_weight:
612
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
613
+ self.mean_bank.append([mean])
614
+ self.var_bank.append([var])
615
+ if MODE == "read":
616
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
617
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
618
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
619
+ mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
620
+ var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
621
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
622
+ hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
623
+ hidden_states_c = hidden_states_uc.clone()
624
+ if do_classifier_free_guidance and style_fidelity > 0:
625
+ hidden_states_c[uc_mask] = hidden_states[uc_mask]
626
+ hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
627
+
628
+ if MODE == "read":
629
+ self.mean_bank = []
630
+ self.var_bank = []
631
+
632
+ if self.upsamplers is not None:
633
+ for upsampler in self.upsamplers:
634
+ hidden_states = upsampler(hidden_states, upsample_size)
635
+
636
+ return hidden_states
637
+
638
+ def hacked_UpBlock2D_forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
639
+ eps = 1e-6
640
+ for i, resnet in enumerate(self.resnets):
641
+ # pop res hidden states
642
+ res_hidden_states = res_hidden_states_tuple[-1]
643
+ res_hidden_states_tuple = res_hidden_states_tuple[:-1]
644
+ hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
645
+ hidden_states = resnet(hidden_states, temb)
646
+
647
+ if MODE == "write":
648
+ if gn_auto_machine_weight >= self.gn_weight:
649
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
650
+ self.mean_bank.append([mean])
651
+ self.var_bank.append([var])
652
+ if MODE == "read":
653
+ if len(self.mean_bank) > 0 and len(self.var_bank) > 0:
654
+ var, mean = torch.var_mean(hidden_states, dim=(2, 3), keepdim=True, correction=0)
655
+ std = torch.maximum(var, torch.zeros_like(var) + eps) ** 0.5
656
+ mean_acc = sum(self.mean_bank[i]) / float(len(self.mean_bank[i]))
657
+ var_acc = sum(self.var_bank[i]) / float(len(self.var_bank[i]))
658
+ std_acc = torch.maximum(var_acc, torch.zeros_like(var_acc) + eps) ** 0.5
659
+ hidden_states_uc = (((hidden_states - mean) / std) * std_acc) + mean_acc
660
+ hidden_states_c = hidden_states_uc.clone()
661
+ if do_classifier_free_guidance and style_fidelity > 0:
662
+ hidden_states_c[uc_mask] = hidden_states[uc_mask]
663
+ hidden_states = style_fidelity * hidden_states_c + (1.0 - style_fidelity) * hidden_states_uc
664
+
665
+ if MODE == "read":
666
+ self.mean_bank = []
667
+ self.var_bank = []
668
+
669
+ if self.upsamplers is not None:
670
+ for upsampler in self.upsamplers:
671
+ hidden_states = upsampler(hidden_states, upsample_size)
672
+
673
+ return hidden_states
674
+
675
+ if reference_attn:
676
+ attn_modules = [module for module in torch_dfs(self.unet) if isinstance(module, BasicTransformerBlock)]
677
+ attn_modules = sorted(attn_modules, key=lambda x: -x.norm1.normalized_shape[0])
678
+
679
+ for i, module in enumerate(attn_modules):
680
+ module._original_inner_forward = module.forward
681
+ module.forward = hacked_basic_transformer_inner_forward.__get__(module, BasicTransformerBlock)
682
+ module.bank = []
683
+ module.attn_weight = float(i) / float(len(attn_modules))
684
+
685
+ if reference_adain:
686
+ gn_modules = [self.unet.mid_block]
687
+ self.unet.mid_block.gn_weight = 0
688
+
689
+ down_blocks = self.unet.down_blocks
690
+ for w, module in enumerate(down_blocks):
691
+ module.gn_weight = 1.0 - float(w) / float(len(down_blocks))
692
+ gn_modules.append(module)
693
+
694
+ up_blocks = self.unet.up_blocks
695
+ for w, module in enumerate(up_blocks):
696
+ module.gn_weight = float(w) / float(len(up_blocks))
697
+ gn_modules.append(module)
698
+
699
+ for i, module in enumerate(gn_modules):
700
+ if getattr(module, "original_forward", None) is None:
701
+ module.original_forward = module.forward
702
+ if i == 0:
703
+ # mid_block
704
+ module.forward = hacked_mid_forward.__get__(module, torch.nn.Module)
705
+ elif isinstance(module, CrossAttnDownBlock2D):
706
+ module.forward = hack_CrossAttnDownBlock2D_forward.__get__(module, CrossAttnDownBlock2D)
707
+ elif isinstance(module, DownBlock2D):
708
+ module.forward = hacked_DownBlock2D_forward.__get__(module, DownBlock2D)
709
+ elif isinstance(module, CrossAttnUpBlock2D):
710
+ module.forward = hacked_CrossAttnUpBlock2D_forward.__get__(module, CrossAttnUpBlock2D)
711
+ elif isinstance(module, UpBlock2D):
712
+ module.forward = hacked_UpBlock2D_forward.__get__(module, UpBlock2D)
713
+ module.mean_bank = []
714
+ module.var_bank = []
715
+ module.gn_weight *= 2
716
+
717
+ # 10. Denoising loop
718
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
719
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
720
+ for i, t in enumerate(timesteps):
721
+ # expand the latents if we are doing classifier free guidance
722
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
723
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
724
+
725
+ # ref only part
726
+ noise = randn_tensor(
727
+ ref_image_latents.shape, generator=generator, device=device, dtype=ref_image_latents.dtype
728
+ )
729
+ ref_xt = self.scheduler.add_noise(
730
+ ref_image_latents,
731
+ noise,
732
+ t.reshape(
733
+ 1,
734
+ ),
735
+ )
736
+ ref_xt = self.scheduler.scale_model_input(ref_xt, t)
737
+
738
+ MODE = "write"
739
+ self.unet(
740
+ ref_xt,
741
+ t,
742
+ encoder_hidden_states=prompt_embeds,
743
+ cross_attention_kwargs=cross_attention_kwargs,
744
+ return_dict=False,
745
+ )
746
+
747
+ # predict the noise residual
748
+ MODE = "read"
749
+ noise_pred = self.unet(
750
+ latent_model_input,
751
+ t,
752
+ encoder_hidden_states=prompt_embeds,
753
+ cross_attention_kwargs=cross_attention_kwargs,
754
+ return_dict=False,
755
+ )[0]
756
+
757
+ # perform guidance
758
+ if do_classifier_free_guidance:
759
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
760
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
761
+
762
+ if do_classifier_free_guidance and guidance_rescale > 0.0:
763
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
764
+ noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
765
+
766
+ # compute the previous noisy sample x_t -> x_t-1
767
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
768
+
769
+ # call the callback, if provided
770
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
771
+ progress_bar.update()
772
+ if callback is not None and i % callback_steps == 0:
773
+ callback(i, t, latents)
774
+
775
+ if not output_type == "latent":
776
+ image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
777
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
778
+ else:
779
+ image = latents
780
+ has_nsfw_concept = None
781
+
782
+ if has_nsfw_concept is None:
783
+ do_denormalize = [True] * image.shape[0]
784
+ else:
785
+ do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
786
+
787
+ image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
788
+
789
+ # Offload last model to CPU
790
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
791
+ self.final_offload_hook.offload()
792
+
793
+ if not return_dict:
794
+ return (image, has_nsfw_concept)
795
+
796
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
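The reference pipeline above re-routes the UNet at call time by binding replacement functions onto existing block instances with `__get__` while keeping a handle on the original `forward`. A minimal standalone sketch of that binding pattern, using a toy module and illustrative names (not part of the pipeline):

```py
import torch


class Toy(torch.nn.Module):
    def forward(self, x):
        return x + 1


def hacked_forward(self, x):
    # `self` is the Toy instance this function was bound to via __get__.
    out = self._original_forward(x)
    self.bank.append(out.detach().clone())  # stash intermediate state, like the banks above
    return out


toy = Toy()
toy._original_forward = toy.forward             # keep the original bound method around
toy.forward = hacked_forward.__get__(toy, Toy)  # bind the plain function as an instance method
toy.bank = []

print(toy(torch.zeros(2)))  # tensor([1., 1.]); toy.bank now holds one detached copy
```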
v0.19.2/stable_diffusion_repaint.py ADDED
@@ -0,0 +1,956 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import inspect
16
+ from typing import Callable, List, Optional, Union
17
+
18
+ import numpy as np
19
+ import PIL
20
+ import torch
21
+ from packaging import version
22
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
23
+
24
+ from diffusers import AutoencoderKL, DiffusionPipeline, UNet2DConditionModel
25
+ from diffusers.configuration_utils import FrozenDict, deprecate
26
+ from diffusers.loaders import LoraLoaderMixin, TextualInversionLoaderMixin
27
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
28
+ from diffusers.pipelines.stable_diffusion.safety_checker import (
29
+ StableDiffusionSafetyChecker,
30
+ )
31
+ from diffusers.schedulers import KarrasDiffusionSchedulers
32
+ from diffusers.utils import (
33
+ is_accelerate_available,
34
+ is_accelerate_version,
35
+ logging,
36
+ randn_tensor,
37
+ )
38
+
39
+
40
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
41
+
42
+
43
+ def prepare_mask_and_masked_image(image, mask):
44
+ """
45
+ Prepares a pair (image, mask) to be consumed by the Stable Diffusion pipeline. This means that those inputs will be
46
+ converted to ``torch.Tensor`` with shapes ``batch x channels x height x width`` where ``channels`` is ``3`` for the
47
+ ``image`` and ``1`` for the ``mask``.
48
+ The ``image`` will be converted to ``torch.float32`` and normalized to be in ``[-1, 1]``. The ``mask`` will be
49
+ binarized (``mask > 0.5``) and cast to ``torch.float32`` too.
50
+ Args:
51
+ image (Union[np.array, PIL.Image, torch.Tensor]): The image to inpaint.
52
+ It can be a ``PIL.Image``, or a ``height x width x 3`` ``np.array`` or a ``channels x height x width``
53
+ ``torch.Tensor`` or a ``batch x channels x height x width`` ``torch.Tensor``.
54
+ mask (_type_): The mask to apply to the image, i.e. regions to inpaint.
55
+ It can be a ``PIL.Image``, or a ``height x width`` ``np.array`` or a ``1 x height x width``
56
+ ``torch.Tensor`` or a ``batch x 1 x height x width`` ``torch.Tensor``.
57
+ Raises:
58
+ ValueError: ``torch.Tensor`` images should be in the ``[-1, 1]`` range. ValueError: ``torch.Tensor`` mask
59
+ should be in the ``[0, 1]`` range. ValueError: ``mask`` and ``image`` should have the same spatial dimensions.
60
+ TypeError: ``mask`` is a ``torch.Tensor`` but ``image`` is not
61
+ (or the other way around).
62
+ Returns:
63
+ tuple[torch.Tensor]: The pair (mask, masked_image) as ``torch.Tensor`` with 4
64
+ dimensions: ``batch x channels x height x width``.
65
+ """
66
+ if isinstance(image, torch.Tensor):
67
+ if not isinstance(mask, torch.Tensor):
68
+ raise TypeError(f"`image` is a torch.Tensor but `mask` (type: {type(mask)} is not")
69
+
70
+ # Batch single image
71
+ if image.ndim == 3:
72
+ assert image.shape[0] == 3, "Image outside a batch should be of shape (3, H, W)"
73
+ image = image.unsqueeze(0)
74
+
75
+ # Batch and add channel dim for single mask
76
+ if mask.ndim == 2:
77
+ mask = mask.unsqueeze(0).unsqueeze(0)
78
+
79
+ # Batch single mask or add channel dim
80
+ if mask.ndim == 3:
81
+ # Single batched mask, no channel dim or single mask not batched but channel dim
82
+ if mask.shape[0] == 1:
83
+ mask = mask.unsqueeze(0)
84
+
85
+ # Batched masks no channel dim
86
+ else:
87
+ mask = mask.unsqueeze(1)
88
+
89
+ assert image.ndim == 4 and mask.ndim == 4, "Image and Mask must have 4 dimensions"
90
+ assert image.shape[-2:] == mask.shape[-2:], "Image and Mask must have the same spatial dimensions"
91
+ assert image.shape[0] == mask.shape[0], "Image and Mask must have the same batch size"
92
+
93
+ # Check image is in [-1, 1]
94
+ if image.min() < -1 or image.max() > 1:
95
+ raise ValueError("Image should be in [-1, 1] range")
96
+
97
+ # Check mask is in [0, 1]
98
+ if mask.min() < 0 or mask.max() > 1:
99
+ raise ValueError("Mask should be in [0, 1] range")
100
+
101
+ # Binarize mask
102
+ mask[mask < 0.5] = 0
103
+ mask[mask >= 0.5] = 1
104
+
105
+ # Image as float32
106
+ image = image.to(dtype=torch.float32)
107
+ elif isinstance(mask, torch.Tensor):
108
+ raise TypeError(f"`mask` is a torch.Tensor but `image` (type: {type(image)} is not")
109
+ else:
110
+ # preprocess image
111
+ if isinstance(image, (PIL.Image.Image, np.ndarray)):
112
+ image = [image]
113
+
114
+ if isinstance(image, list) and isinstance(image[0], PIL.Image.Image):
115
+ image = [np.array(i.convert("RGB"))[None, :] for i in image]
116
+ image = np.concatenate(image, axis=0)
117
+ elif isinstance(image, list) and isinstance(image[0], np.ndarray):
118
+ image = np.concatenate([i[None, :] for i in image], axis=0)
119
+
120
+ image = image.transpose(0, 3, 1, 2)
121
+ image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
122
+
123
+ # preprocess mask
124
+ if isinstance(mask, (PIL.Image.Image, np.ndarray)):
125
+ mask = [mask]
126
+
127
+ if isinstance(mask, list) and isinstance(mask[0], PIL.Image.Image):
128
+ mask = np.concatenate([np.array(m.convert("L"))[None, None, :] for m in mask], axis=0)
129
+ mask = mask.astype(np.float32) / 255.0
130
+ elif isinstance(mask, list) and isinstance(mask[0], np.ndarray):
131
+ mask = np.concatenate([m[None, None, :] for m in mask], axis=0)
132
+
133
+ mask[mask < 0.5] = 0
134
+ mask[mask >= 0.5] = 1
135
+ mask = torch.from_numpy(mask)
136
+
137
+ # masked_image = image * (mask >= 0.5)
138
+ masked_image = image
139
+
140
+ return mask, masked_image
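+ # Shape/range sketch (illustrative): for a 512x512 RGB PIL image and a same-sized "L"-mode PIL
+ # mask, this returns a (1, 1, 512, 512) float mask binarized at 0.5 and a (1, 3, 512, 512)
+ # image tensor in [-1, 1] (left unmasked here, per the commented-out line above).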
141
+
142
+
143
+ class StableDiffusionRepaintPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin):
144
+ r"""
145
+ Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
146
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
147
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
148
+ In addition the pipeline inherits the following loading methods:
149
+ - *Textual-Inversion*: [`loaders.TextualInversionLoaderMixin.load_textual_inversion`]
150
+ - *LoRA*: [`loaders.LoraLoaderMixin.load_lora_weights`]
151
+ as well as the following saving methods:
152
+ - *LoRA*: [`loaders.LoraLoaderMixin.save_lora_weights`]
153
+ Args:
154
+ vae ([`AutoencoderKL`]):
155
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
156
+ text_encoder ([`CLIPTextModel`]):
157
+ Frozen text-encoder. Stable Diffusion uses the text portion of
158
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
159
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
160
+ tokenizer (`CLIPTokenizer`):
161
+ Tokenizer of class
162
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
163
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
164
+ scheduler ([`SchedulerMixin`]):
165
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
166
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
167
+ safety_checker ([`StableDiffusionSafetyChecker`]):
168
+ Classification module that estimates whether generated images could be considered offensive or harmful.
169
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
170
+ feature_extractor ([`CLIPImageProcessor`]):
171
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
172
+ """
173
+ _optional_components = ["safety_checker", "feature_extractor"]
174
+
175
+ def __init__(
176
+ self,
177
+ vae: AutoencoderKL,
178
+ text_encoder: CLIPTextModel,
179
+ tokenizer: CLIPTokenizer,
180
+ unet: UNet2DConditionModel,
181
+ scheduler: KarrasDiffusionSchedulers,
182
+ safety_checker: StableDiffusionSafetyChecker,
183
+ feature_extractor: CLIPImageProcessor,
184
+ requires_safety_checker: bool = True,
185
+ ):
186
+ super().__init__()
187
+
188
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
189
+ deprecation_message = (
190
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
191
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
192
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
193
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
194
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
195
+ " file"
196
+ )
197
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
198
+ new_config = dict(scheduler.config)
199
+ new_config["steps_offset"] = 1
200
+ scheduler._internal_dict = FrozenDict(new_config)
201
+
202
+ if hasattr(scheduler.config, "skip_prk_steps") and scheduler.config.skip_prk_steps is False:
203
+ deprecation_message = (
204
+ f"The configuration file of this scheduler: {scheduler} has not set the configuration"
205
+ " `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make"
206
+ " sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to"
207
+ " incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face"
208
+ " Hub, it would be very nice if you could open a Pull request for the"
209
+ " `scheduler/scheduler_config.json` file"
210
+ )
211
+ deprecate(
212
+ "skip_prk_steps not set",
213
+ "1.0.0",
214
+ deprecation_message,
215
+ standard_warn=False,
216
+ )
217
+ new_config = dict(scheduler.config)
218
+ new_config["skip_prk_steps"] = True
219
+ scheduler._internal_dict = FrozenDict(new_config)
220
+
221
+ if safety_checker is None and requires_safety_checker:
222
+ logger.warning(
223
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
224
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
225
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
226
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
227
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
228
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
229
+ )
230
+
231
+ if safety_checker is not None and feature_extractor is None:
232
+ raise ValueError(
233
+ "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
234
+ " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
235
+ )
236
+
237
+ is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
238
+ version.parse(unet.config._diffusers_version).base_version
239
+ ) < version.parse("0.9.0.dev0")
240
+ is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
241
+ if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
242
+ deprecation_message = (
243
+ "The configuration file of the unet has set the default `sample_size` to smaller than"
244
+ " 64, which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
245
+ " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
246
+ " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
247
+ " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
248
+ " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
249
+ " in the config might lead to incorrect results in future versions. If you have downloaded this"
250
+ " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
251
+ " the `unet/config.json` file"
252
+ )
253
+ deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
254
+ new_config = dict(unet.config)
255
+ new_config["sample_size"] = 64
256
+ unet._internal_dict = FrozenDict(new_config)
257
+ # Check shapes, assume num_channels_latents == 4, num_channels_mask == 1, num_channels_masked == 4
258
+ if unet.config.in_channels != 4:
259
+ logger.warning(
260
+ f"You have loaded a UNet with {unet.config.in_channels} input channels, whereas by default,"
261
+ f" {self.__class__} assumes that `pipeline.unet` has 4 input channels: 4 for `num_channels_latents`."
262
+ " If you did not intend to modify"
263
+ " this behavior, please check whether you have loaded the right checkpoint."
264
+ )
265
+
266
+ self.register_modules(
267
+ vae=vae,
268
+ text_encoder=text_encoder,
269
+ tokenizer=tokenizer,
270
+ unet=unet,
271
+ scheduler=scheduler,
272
+ safety_checker=safety_checker,
273
+ feature_extractor=feature_extractor,
274
+ )
275
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
276
+ self.register_to_config(requires_safety_checker=requires_safety_checker)
277
+
278
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload
279
+ def enable_sequential_cpu_offload(self, gpu_id=0):
280
+ r"""
281
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
282
+ text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
283
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
284
+ Note that offloading happens on a submodule basis. Memory savings are higher than with
285
+ `enable_model_cpu_offload`, but performance is lower.
286
+ """
287
+ if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"):
288
+ from accelerate import cpu_offload
289
+ else:
290
+ raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher")
291
+
292
+ device = torch.device(f"cuda:{gpu_id}")
293
+
294
+ if self.device.type != "cpu":
295
+ self.to("cpu", silence_dtype_warnings=True)
296
+ torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
297
+
298
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]:
299
+ cpu_offload(cpu_offloaded_model, device)
300
+
301
+ if self.safety_checker is not None:
302
+ cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True)
303
+
304
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_model_cpu_offload
305
+ def enable_model_cpu_offload(self, gpu_id=0):
306
+ r"""
307
+ Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
308
+ to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
309
+ method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
310
+ `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
311
+ """
312
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
313
+ from accelerate import cpu_offload_with_hook
314
+ else:
315
+ raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
316
+
317
+ device = torch.device(f"cuda:{gpu_id}")
318
+
319
+ if self.device.type != "cpu":
320
+ self.to("cpu", silence_dtype_warnings=True)
321
+ torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
322
+
323
+ hook = None
324
+ for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
325
+ _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
326
+
327
+ if self.safety_checker is not None:
328
+ _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
329
+
330
+ # We'll offload the last model manually.
331
+ self.final_offload_hook = hook
332
+
333
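The two offload helpers above trade speed for memory in opposite directions. A minimal sketch of how a caller would pick one, assuming the pipeline is loaded with `custom_pipeline="stable_diffusion_repaint"` as in the usage example further down in this file (illustrative only, not part of the pipeline source):

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="stable_diffusion_repaint",
    torch_dtype=torch.float16,
)
# One whole sub-model on the GPU at a time: modest savings, small slowdown.
pipe.enable_model_cpu_offload()
# Or: per-submodule offload, largest savings, noticeably slower.
# pipe.enable_sequential_cpu_offload()
```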
+ @property
334
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
335
+ def _execution_device(self):
336
+ r"""
337
+ Returns the device on which the pipeline's models will be executed. After calling
338
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
339
+ hooks.
340
+ """
341
+ if not hasattr(self.unet, "_hf_hook"):
342
+ return self.device
343
+ for module in self.unet.modules():
344
+ if (
345
+ hasattr(module, "_hf_hook")
346
+ and hasattr(module._hf_hook, "execution_device")
347
+ and module._hf_hook.execution_device is not None
348
+ ):
349
+ return torch.device(module._hf_hook.execution_device)
350
+ return self.device
351
+
352
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
353
+ def _encode_prompt(
354
+ self,
355
+ prompt,
356
+ device,
357
+ num_images_per_prompt,
358
+ do_classifier_free_guidance,
359
+ negative_prompt=None,
360
+ prompt_embeds: Optional[torch.FloatTensor] = None,
361
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
362
+ ):
363
+ r"""
364
+ Encodes the prompt into text encoder hidden states.
365
+ Args:
366
+ prompt (`str` or `List[str]`, *optional*):
367
+ prompt to be encoded
368
+ device: (`torch.device`):
369
+ torch device
370
+ num_images_per_prompt (`int`):
371
+ number of images that should be generated per prompt
372
+ do_classifier_free_guidance (`bool`):
373
+ whether to use classifier free guidance or not
374
+ negative_prompt (`str` or `List[str]`, *optional*):
375
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
376
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
377
+ less than `1`).
378
+ prompt_embeds (`torch.FloatTensor`, *optional*):
379
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
380
+ provided, text embeddings will be generated from `prompt` input argument.
381
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
382
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
383
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
384
+ argument.
385
+ """
386
+ if prompt is not None and isinstance(prompt, str):
387
+ batch_size = 1
388
+ elif prompt is not None and isinstance(prompt, list):
389
+ batch_size = len(prompt)
390
+ else:
391
+ batch_size = prompt_embeds.shape[0]
392
+
393
+ if prompt_embeds is None:
394
+ # textual inversion: process multi-vector tokens if necessary
395
+ if isinstance(self, TextualInversionLoaderMixin):
396
+ prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
397
+
398
+ text_inputs = self.tokenizer(
399
+ prompt,
400
+ padding="max_length",
401
+ max_length=self.tokenizer.model_max_length,
402
+ truncation=True,
403
+ return_tensors="pt",
404
+ )
405
+ text_input_ids = text_inputs.input_ids
406
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
407
+
408
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
409
+ text_input_ids, untruncated_ids
410
+ ):
411
+ removed_text = self.tokenizer.batch_decode(
412
+ untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
413
+ )
414
+ logger.warning(
415
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
416
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
417
+ )
418
+
419
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
420
+ attention_mask = text_inputs.attention_mask.to(device)
421
+ else:
422
+ attention_mask = None
423
+
424
+ prompt_embeds = self.text_encoder(
425
+ text_input_ids.to(device),
426
+ attention_mask=attention_mask,
427
+ )
428
+ prompt_embeds = prompt_embeds[0]
429
+
430
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
431
+
432
+ bs_embed, seq_len, _ = prompt_embeds.shape
433
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
434
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
435
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
436
+
437
+ # get unconditional embeddings for classifier free guidance
438
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
439
+ uncond_tokens: List[str]
440
+ if negative_prompt is None:
441
+ uncond_tokens = [""] * batch_size
442
+ elif type(prompt) is not type(negative_prompt):
443
+ raise TypeError(
444
+ f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
445
+ f" {type(prompt)}."
446
+ )
447
+ elif isinstance(negative_prompt, str):
448
+ uncond_tokens = [negative_prompt]
449
+ elif batch_size != len(negative_prompt):
450
+ raise ValueError(
451
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
452
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
453
+ " the batch size of `prompt`."
454
+ )
455
+ else:
456
+ uncond_tokens = negative_prompt
457
+
458
+ # textual inversion: process multi-vector tokens if necessary
459
+ if isinstance(self, TextualInversionLoaderMixin):
460
+ uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
461
+
462
+ max_length = prompt_embeds.shape[1]
463
+ uncond_input = self.tokenizer(
464
+ uncond_tokens,
465
+ padding="max_length",
466
+ max_length=max_length,
467
+ truncation=True,
468
+ return_tensors="pt",
469
+ )
470
+
471
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
472
+ attention_mask = uncond_input.attention_mask.to(device)
473
+ else:
474
+ attention_mask = None
475
+
476
+ negative_prompt_embeds = self.text_encoder(
477
+ uncond_input.input_ids.to(device),
478
+ attention_mask=attention_mask,
479
+ )
480
+ negative_prompt_embeds = negative_prompt_embeds[0]
481
+
482
+ if do_classifier_free_guidance:
483
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
484
+ seq_len = negative_prompt_embeds.shape[1]
485
+
486
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
487
+
488
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
489
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
490
+
491
+ # For classifier free guidance, we need to do two forward passes.
492
+ # Here we concatenate the unconditional and text embeddings into a single batch
493
+ # to avoid doing two forward passes
494
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
495
+
496
+ return prompt_embeds
497
+
498
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
499
+ def run_safety_checker(self, image, device, dtype):
500
+ if self.safety_checker is not None:
501
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
502
+ image, has_nsfw_concept = self.safety_checker(
503
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
504
+ )
505
+ else:
506
+ has_nsfw_concept = None
507
+ return image, has_nsfw_concept
508
+
509
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
510
+ def prepare_extra_step_kwargs(self, generator, eta):
511
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
512
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
513
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
514
+ # and should be between [0, 1]
515
+
516
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
517
+ extra_step_kwargs = {}
518
+ if accepts_eta:
519
+ extra_step_kwargs["eta"] = eta
520
+
521
+ # check if the scheduler accepts generator
522
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
523
+ if accepts_generator:
524
+ extra_step_kwargs["generator"] = generator
525
+ return extra_step_kwargs
526
+
527
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
528
+ def decode_latents(self, latents):
529
+ latents = 1 / self.vae.config.scaling_factor * latents
530
+ image = self.vae.decode(latents).sample
531
+ image = (image / 2 + 0.5).clamp(0, 1)
532
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
533
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
534
+ return image
535
+
536
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs
537
+ def check_inputs(
538
+ self,
539
+ prompt,
540
+ height,
541
+ width,
542
+ callback_steps,
543
+ negative_prompt=None,
544
+ prompt_embeds=None,
545
+ negative_prompt_embeds=None,
546
+ ):
547
+ if height % 8 != 0 or width % 8 != 0:
548
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
549
+
550
+ if (callback_steps is None) or (
551
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
552
+ ):
553
+ raise ValueError(
554
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
555
+ f" {type(callback_steps)}."
556
+ )
557
+
558
+ if prompt is not None and prompt_embeds is not None:
559
+ raise ValueError(
560
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
561
+ " only forward one of the two."
562
+ )
563
+ elif prompt is None and prompt_embeds is None:
564
+ raise ValueError(
565
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
566
+ )
567
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
568
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
569
+
570
+ if negative_prompt is not None and negative_prompt_embeds is not None:
571
+ raise ValueError(
572
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
573
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
574
+ )
575
+
576
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
577
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
578
+ raise ValueError(
579
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
580
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
581
+ f" {negative_prompt_embeds.shape}."
582
+ )
583
+
584
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
585
+ def prepare_latents(
586
+ self,
587
+ batch_size,
588
+ num_channels_latents,
589
+ height,
590
+ width,
591
+ dtype,
592
+ device,
593
+ generator,
594
+ latents=None,
595
+ ):
596
+ shape = (
597
+ batch_size,
598
+ num_channels_latents,
599
+ height // self.vae_scale_factor,
600
+ width // self.vae_scale_factor,
601
+ )
602
+ if isinstance(generator, list) and len(generator) != batch_size:
603
+ raise ValueError(
604
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
605
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
606
+ )
607
+
608
+ if latents is None:
609
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
610
+ else:
611
+ latents = latents.to(device)
612
+
613
+ # scale the initial noise by the standard deviation required by the scheduler
614
+ latents = latents * self.scheduler.init_noise_sigma
615
+ return latents
616
+
617
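Since `prepare_latents` only draws fresh noise when `latents` is not supplied, reproducible runs can pre-generate the noise with a seeded `torch.Generator` and pass it in. A small sketch (the shapes assume the default 512x512 output with `vae_scale_factor=8`):

```py
import torch

generator = torch.Generator().manual_seed(0)
# (batch, latent channels, height // 8, width // 8) for a 512x512 image
latents = torch.randn((1, 4, 64, 64), generator=generator)
# pass `latents=latents` (and the same `generator`) to the pipeline call
```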
+ def prepare_mask_latents(
618
+ self,
619
+ mask,
620
+ masked_image,
621
+ batch_size,
622
+ height,
623
+ width,
624
+ dtype,
625
+ device,
626
+ generator,
627
+ do_classifier_free_guidance,
628
+ ):
629
+ # resize the mask to latents shape as we concatenate the mask to the latents
630
+ # we do that before converting to dtype to avoid breaking in case we're using cpu_offload
631
+ # and half precision
632
+ mask = torch.nn.functional.interpolate(
633
+ mask, size=(height // self.vae_scale_factor, width // self.vae_scale_factor)
634
+ )
635
+ mask = mask.to(device=device, dtype=dtype)
636
+
637
+ masked_image = masked_image.to(device=device, dtype=dtype)
638
+
639
+ # encode the mask image into latent space so we can concatenate it to the latents
640
+ if isinstance(generator, list):
641
+ masked_image_latents = [
642
+ self.vae.encode(masked_image[i : i + 1]).latent_dist.sample(generator=generator[i])
643
+ for i in range(batch_size)
644
+ ]
645
+ masked_image_latents = torch.cat(masked_image_latents, dim=0)
646
+ else:
647
+ masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)
648
+ masked_image_latents = self.vae.config.scaling_factor * masked_image_latents
649
+
650
+ # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method
651
+ if mask.shape[0] < batch_size:
652
+ if not batch_size % mask.shape[0] == 0:
653
+ raise ValueError(
654
+ "The passed mask and the required batch size don't match. Masks are supposed to be duplicated to"
655
+ f" a total batch size of {batch_size}, but {mask.shape[0]} masks were passed. Make sure the number"
656
+ " of masks that you pass is divisible by the total requested batch size."
657
+ )
658
+ mask = mask.repeat(batch_size // mask.shape[0], 1, 1, 1)
659
+ if masked_image_latents.shape[0] < batch_size:
660
+ if not batch_size % masked_image_latents.shape[0] == 0:
661
+ raise ValueError(
662
+ "The passed images and the required batch size don't match. Images are supposed to be duplicated"
663
+ f" to a total batch size of {batch_size}, but {masked_image_latents.shape[0]} images were passed."
664
+ " Make sure the number of images that you pass is divisible by the total requested batch size."
665
+ )
666
+ masked_image_latents = masked_image_latents.repeat(batch_size // masked_image_latents.shape[0], 1, 1, 1)
667
+
668
+ mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask
669
+ masked_image_latents = (
670
+ torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents
671
+ )
672
+
673
+ # aligning device to prevent device errors when concatenating it with the latent model input
674
+ masked_image_latents = masked_image_latents.to(device=device, dtype=dtype)
675
+ return mask, masked_image_latents
676
+
677
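For reference, `prepare_mask_and_masked_image` (called in `__call__` below but defined elsewhere in this file) conventionally binarizes the mask and blanks out the region to be repainted before the tensors reach `prepare_mask_latents`. A rough sketch of that convention, not the exact helper:

```py
import numpy as np
import torch
from PIL import Image

mask_pil = Image.open("mask.png").convert("L")
mask = torch.from_numpy(np.array(mask_pil).astype(np.float32) / 255.0)[None, None]
mask[mask < 0.5] = 0
mask[mask >= 0.5] = 1                        # 1 = repaint, 0 = keep
image = torch.randn(1, 3, *mask.shape[-2:])  # stand-in for the preprocessed image in [-1, 1]
masked_image = image * (mask < 0.5)          # zero out the region to be repainted
```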
+ @torch.no_grad()
678
+ def __call__(
679
+ self,
680
+ prompt: Union[str, List[str]] = None,
681
+ image: Union[torch.FloatTensor, PIL.Image.Image] = None,
682
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
683
+ height: Optional[int] = None,
684
+ width: Optional[int] = None,
685
+ num_inference_steps: int = 50,
686
+ jump_length: Optional[int] = 10,
687
+ jump_n_sample: Optional[int] = 10,
688
+ guidance_scale: float = 7.5,
689
+ negative_prompt: Optional[Union[str, List[str]]] = None,
690
+ num_images_per_prompt: Optional[int] = 1,
691
+ eta: float = 0.0,
692
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
693
+ latents: Optional[torch.FloatTensor] = None,
694
+ prompt_embeds: Optional[torch.FloatTensor] = None,
695
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
696
+ output_type: Optional[str] = "pil",
697
+ return_dict: bool = True,
698
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
699
+ callback_steps: int = 1,
700
+ ):
701
+ r"""
702
+ Function invoked when calling the pipeline for generation.
703
+ Args:
704
+ prompt (`str` or `List[str]`, *optional*):
705
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
706
+ instead.
707
+ image (`PIL.Image.Image`):
708
+ `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
709
+ be masked out with `mask_image` and repainted according to `prompt`.
710
+ mask_image (`PIL.Image.Image`):
711
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
712
+ repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
713
+ to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
714
+ instead of 3, so the expected shape would be `(B, H, W, 1)`.
715
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
716
+ The height in pixels of the generated image.
717
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
718
+ The width in pixels of the generated image.
719
+ num_inference_steps (`int`, *optional*, defaults to 50):
720
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
721
+ expense of slower inference.
722
+ jump_length (`int`, *optional*, defaults to 10):
723
+ The number of steps taken forward in time before going backward in time for a single jump ("j" in
724
+ RePaint paper). Take a look at Figure 9 and 10 in https://arxiv.org/pdf/2201.09865.pdf.
725
+ jump_n_sample (`int`, *optional*, defaults to 10):
726
+ The number of times we will make a forward time jump for a given chosen time sample. Take a look at
727
+ Figure 9 and 10 in https://arxiv.org/pdf/2201.09865.pdf.
728
+ guidance_scale (`float`, *optional*, defaults to 7.5):
729
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
730
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
731
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
732
+ 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
733
+ usually at the expense of lower image quality.
734
+ negative_prompt (`str` or `List[str]`, *optional*):
735
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
736
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale`
737
+ is less than `1`).
738
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
739
+ The number of images to generate per prompt.
740
+ eta (`float`, *optional*, defaults to 0.0):
741
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
742
+ [`schedulers.DDIMScheduler`], will be ignored for others.
743
+ generator (`torch.Generator`, *optional*):
744
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
745
+ to make generation deterministic.
746
+ latents (`torch.FloatTensor`, *optional*):
747
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
748
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
749
+ tensor will be generated by sampling using the supplied random `generator`.
750
+ prompt_embeds (`torch.FloatTensor`, *optional*):
751
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
752
+ provided, text embeddings will be generated from `prompt` input argument.
753
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
754
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
755
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
756
+ argument.
757
+ output_type (`str`, *optional*, defaults to `"pil"`):
758
+ The output format of the generated image. Choose between
759
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
760
+ return_dict (`bool`, *optional*, defaults to `True`):
761
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
762
+ plain tuple.
763
+ callback (`Callable`, *optional*):
764
+ A function that will be called every `callback_steps` steps during inference. The function will be
765
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
766
+ callback_steps (`int`, *optional*, defaults to 1):
767
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
768
+ called at every step.
769
+ Examples:
770
+ ```py
771
+ >>> import PIL
772
+ >>> import requests
773
+ >>> import torch
774
+ >>> from io import BytesIO
775
+ >>> from diffusers import DiffusionPipeline, RePaintScheduler
776
+ >>> def download_image(url):
777
+ ... response = requests.get(url)
778
+ ... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
779
+ >>> base_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/"
780
+ >>> img_url = base_url + "overture-creations-5sI6fQgYIuo.png"
781
+ >>> mask_url = base_url + "overture-creations-5sI6fQgYIuo_mask.png"
782
+ >>> init_image = download_image(img_url).resize((512, 512))
783
+ >>> mask_image = download_image(mask_url).resize((512, 512))
784
+ >>> pipe = DiffusionPipeline.from_pretrained(
785
+ ... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, custom_pipeline="stable_diffusion_repaint",
786
+ ... )
787
+ >>> pipe.scheduler = RePaintScheduler.from_config(pipe.scheduler.config)
788
+ >>> pipe = pipe.to("cuda")
789
+ >>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
790
+ >>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
791
+ ```
792
+ Returns:
793
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
794
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
795
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
796
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
797
+ (nsfw) content, according to the `safety_checker`.
798
+ """
799
+ # 0. Default height and width to unet
800
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
801
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
802
+
803
+ # 1. Check inputs
804
+ self.check_inputs(
805
+ prompt,
806
+ height,
807
+ width,
808
+ callback_steps,
809
+ negative_prompt,
810
+ prompt_embeds,
811
+ negative_prompt_embeds,
812
+ )
813
+
814
+ if image is None:
815
+ raise ValueError("`image` input cannot be undefined.")
816
+
817
+ if mask_image is None:
818
+ raise ValueError("`mask_image` input cannot be undefined.")
819
+
820
+ # 2. Define call parameters
821
+ if prompt is not None and isinstance(prompt, str):
822
+ batch_size = 1
823
+ elif prompt is not None and isinstance(prompt, list):
824
+ batch_size = len(prompt)
825
+ else:
826
+ batch_size = prompt_embeds.shape[0]
827
+
828
+ device = self._execution_device
829
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
830
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
831
+ # corresponds to doing no classifier free guidance.
832
+ do_classifier_free_guidance = guidance_scale > 1.0
833
+
834
+ # 3. Encode input prompt
835
+ prompt_embeds = self._encode_prompt(
836
+ prompt,
837
+ device,
838
+ num_images_per_prompt,
839
+ do_classifier_free_guidance,
840
+ negative_prompt,
841
+ prompt_embeds=prompt_embeds,
842
+ negative_prompt_embeds=negative_prompt_embeds,
843
+ )
844
+
845
+ # 4. Preprocess mask and image
846
+ mask, masked_image = prepare_mask_and_masked_image(image, mask_image)
847
+
848
+ # 5. set timesteps
849
+ self.scheduler.set_timesteps(num_inference_steps, jump_length, jump_n_sample, device)
850
+ self.scheduler.eta = eta
851
+
852
+ timesteps = self.scheduler.timesteps
853
+ # latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
854
+
855
+ # 6. Prepare latent variables
856
+ num_channels_latents = self.vae.config.latent_channels
857
+ latents = self.prepare_latents(
858
+ batch_size * num_images_per_prompt,
859
+ num_channels_latents,
860
+ height,
861
+ width,
862
+ prompt_embeds.dtype,
863
+ device,
864
+ generator,
865
+ latents,
866
+ )
867
+
868
+ # 7. Prepare mask latent variables
869
+ mask, masked_image_latents = self.prepare_mask_latents(
870
+ mask,
871
+ masked_image,
872
+ batch_size * num_images_per_prompt,
873
+ height,
874
+ width,
875
+ prompt_embeds.dtype,
876
+ device,
877
+ generator,
878
+ do_classifier_free_guidance=False, # We do not need duplicate mask and image
879
+ )
880
+
881
+ # 8. Check that sizes of mask, masked image and latents match
882
+ # num_channels_mask = mask.shape[1]
883
+ # num_channels_masked_image = masked_image_latents.shape[1]
884
+ if num_channels_latents != self.unet.config.in_channels:
885
+ raise ValueError(
886
+ f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects"
887
+ f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents}."
888
+ " Please verify the config of"
889
+ " `pipeline.unet` or your `mask_image` or `image` input."
890
+ )
891
+
892
+ # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
893
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
894
+
895
+ t_last = timesteps[0] + 1
896
+
897
+ # 10. Denoising loop
898
+ with self.progress_bar(total=len(timesteps)) as progress_bar:
899
+ for i, t in enumerate(timesteps):
900
+ if t >= t_last:
901
+ # compute the reverse: x_t-1 -> x_t
902
+ latents = self.scheduler.undo_step(latents, t_last, generator)
903
+ progress_bar.update()
904
+ t_last = t
905
+ continue
906
+
907
+ # expand the latents if we are doing classifier free guidance
908
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
909
+
910
+ # concat latents, mask, masked_image_latents in the channel dimension
911
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
912
+ # latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
913
+
914
+ # predict the noise residual
915
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=prompt_embeds).sample
916
+
917
+ # perform guidance
918
+ if do_classifier_free_guidance:
919
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
920
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
921
+
922
+ # compute the previous noisy sample x_t -> x_t-1
923
+ latents = self.scheduler.step(
924
+ noise_pred,
925
+ t,
926
+ latents,
927
+ masked_image_latents,
928
+ mask,
929
+ **extra_step_kwargs,
930
+ ).prev_sample
931
+
932
+ # call the callback, if provided
933
+ progress_bar.update()
934
+ if callback is not None and i % callback_steps == 0:
935
+ callback(i, t, latents)
936
+
937
+ t_last = t
938
+
939
+ # 11. Post-processing
940
+ image = self.decode_latents(latents)
941
+
942
+ # 12. Run safety checker
943
+ image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
944
+
945
+ # 13. Convert to PIL
946
+ if output_type == "pil":
947
+ image = self.numpy_to_pil(image)
948
+
949
+ # Offload last model to CPU
950
+ if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
951
+ self.final_offload_hook.offload()
952
+
953
+ if not return_dict:
954
+ return (image, has_nsfw_concept)
955
+
956
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
v0.19.2/stable_diffusion_tensorrt_img2img.py ADDED
@@ -0,0 +1,1055 @@
1
+ #
2
+ # Copyright 2023 The HuggingFace Inc. team.
3
+ # SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+ #
6
+ # Licensed under the Apache License, Version 2.0 (the "License");
7
+ # you may not use this file except in compliance with the License.
8
+ # You may obtain a copy of the License at
9
+ #
10
+ # http://www.apache.org/licenses/LICENSE-2.0
11
+ #
12
+ # Unless required by applicable law or agreed to in writing, software
13
+ # distributed under the License is distributed on an "AS IS" BASIS,
14
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+ # See the License for the specific language governing permissions and
16
+ # limitations under the License.
17
+
18
+ import gc
19
+ import os
20
+ from collections import OrderedDict
21
+ from copy import copy
22
+ from typing import List, Optional, Union
23
+
24
+ import numpy as np
25
+ import onnx
26
+ import onnx_graphsurgeon as gs
27
+ import PIL
28
+ import tensorrt as trt
29
+ import torch
30
+ from huggingface_hub import snapshot_download
31
+ from onnx import shape_inference
32
+ from polygraphy import cuda
33
+ from polygraphy.backend.common import bytes_from_path
34
+ from polygraphy.backend.onnx.loader import fold_constants
35
+ from polygraphy.backend.trt import (
36
+ CreateConfig,
37
+ Profile,
38
+ engine_from_bytes,
39
+ engine_from_network,
40
+ network_from_onnx_path,
41
+ save_engine,
42
+ )
43
+ from polygraphy.backend.trt import util as trt_util
44
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
45
+
46
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
47
+ from diffusers.pipelines.stable_diffusion import (
48
+ StableDiffusionImg2ImgPipeline,
49
+ StableDiffusionPipelineOutput,
50
+ StableDiffusionSafetyChecker,
51
+ )
52
+ from diffusers.schedulers import DDIMScheduler
53
+ from diffusers.utils import DIFFUSERS_CACHE, logging
54
+
55
+
56
+ """
57
+ Installation instructions
58
+ python3 -m pip install --upgrade transformers diffusers>=0.16.0
59
+ python3 -m pip install --upgrade tensorrt>=8.6.1
60
+ python3 -m pip install --upgrade polygraphy>=0.47.0 onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
61
+ python3 -m pip install onnxruntime
62
+ """
63
+
64
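A minimal usage sketch for this community pipeline, under the assumption that it is loaded with `custom_pipeline="stable_diffusion_tensorrt_img2img"` (matching this file name); the model id is a placeholder and the TensorRT engines are built on first use, which can take several minutes:

```py
import torch
from diffusers import DDIMScheduler, DiffusionPipeline

model_id = "stabilityai/stable-diffusion-2-1"  # placeholder model id
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline="stable_diffusion_tensorrt_img2img",
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to("cuda")
```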
+ TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
65
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
66
+
67
+ # Map of numpy dtype -> torch dtype
68
+ numpy_to_torch_dtype_dict = {
69
+ np.uint8: torch.uint8,
70
+ np.int8: torch.int8,
71
+ np.int16: torch.int16,
72
+ np.int32: torch.int32,
73
+ np.int64: torch.int64,
74
+ np.float16: torch.float16,
75
+ np.float32: torch.float32,
76
+ np.float64: torch.float64,
77
+ np.complex64: torch.complex64,
78
+ np.complex128: torch.complex128,
79
+ }
80
+ if np.version.full_version >= "1.24.0":
81
+ numpy_to_torch_dtype_dict[np.bool_] = torch.bool
82
+ else:
83
+ numpy_to_torch_dtype_dict[np.bool] = torch.bool
84
+
85
+ # Map of torch dtype -> numpy dtype
86
+ torch_to_numpy_dtype_dict = {value: key for (key, value) in numpy_to_torch_dtype_dict.items()}
87
+
88
+
89
+ def device_view(t):
90
+ return cuda.DeviceView(ptr=t.data_ptr(), shape=t.shape, dtype=torch_to_numpy_dtype_dict[t.dtype])
91
+
92
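`device_view` wraps an existing CUDA tensor's memory as a Polygraphy `DeviceView` so it can be handed to `Engine.infer` below without a copy. A tiny illustration (requires a CUDA device):

```py
import torch

input_ids = torch.zeros(1, 77, dtype=torch.int32, device="cuda")
view = device_view(input_ids)  # shares input_ids' memory; dtype mapped via torch_to_numpy_dtype_dict
```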
+
93
+ def preprocess_image(image):
94
+ """
95
+ image: PIL.Image.Image (returned as a torch.Tensor scaled to [-1, 1])
96
+ """
97
+ w, h = image.size
98
+ w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
99
+ image = image.resize((w, h))
100
+ image = np.array(image).astype(np.float32) / 255.0
101
+ image = image[None].transpose(0, 3, 1, 2)
102
+ image = torch.from_numpy(image).contiguous()
103
+ return 2.0 * image - 1.0
104
+
105
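`preprocess_image` rounds the width and height down to multiples of 32, converts the PIL image to a `(1, 3, H, W)` float tensor, and rescales pixel values from `[0, 1]` to `[-1, 1]`. A quick illustrative check:

```py
from PIL import Image

img = Image.new("RGB", (515, 389))      # arbitrary size
x = preprocess_image(img)
print(x.shape)                          # torch.Size([1, 3, 384, 512]) -- 389 -> 384, 515 -> 512
print(x.min().item(), x.max().item())   # values lie in [-1.0, 1.0]
```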
+
106
+ class Engine:
107
+ def __init__(self, engine_path):
108
+ self.engine_path = engine_path
109
+ self.engine = None
110
+ self.context = None
111
+ self.buffers = OrderedDict()
112
+ self.tensors = OrderedDict()
113
+
114
+ def __del__(self):
115
+ [buf.free() for buf in self.buffers.values() if isinstance(buf, cuda.DeviceArray)]
116
+ del self.engine
117
+ del self.context
118
+ del self.buffers
119
+ del self.tensors
120
+
121
+ def build(
122
+ self,
123
+ onnx_path,
124
+ fp16,
125
+ input_profile=None,
126
+ enable_preview=False,
127
+ enable_all_tactics=False,
128
+ timing_cache=None,
129
+ workspace_size=0,
130
+ ):
131
+ logger.warning(f"Building TensorRT engine for {onnx_path}: {self.engine_path}")
132
+ p = Profile()
133
+ if input_profile:
134
+ for name, dims in input_profile.items():
135
+ assert len(dims) == 3
136
+ p.add(name, min=dims[0], opt=dims[1], max=dims[2])
137
+
138
+ config_kwargs = {}
139
+
140
+ config_kwargs["preview_features"] = [trt.PreviewFeature.DISABLE_EXTERNAL_TACTIC_SOURCES_FOR_CORE_0805]
141
+ if enable_preview:
142
+ # Faster dynamic shapes made optional since it increases engine build time.
143
+ config_kwargs["preview_features"].append(trt.PreviewFeature.FASTER_DYNAMIC_SHAPES_0805)
144
+ if workspace_size > 0:
145
+ config_kwargs["memory_pool_limits"] = {trt.MemoryPoolType.WORKSPACE: workspace_size}
146
+ if not enable_all_tactics:
147
+ config_kwargs["tactic_sources"] = []
148
+
149
+ engine = engine_from_network(
150
+ network_from_onnx_path(onnx_path, flags=[trt.OnnxParserFlag.NATIVE_INSTANCENORM]),
151
+ config=CreateConfig(fp16=fp16, profiles=[p], load_timing_cache=timing_cache, **config_kwargs),
152
+ save_timing_cache=timing_cache,
153
+ )
154
+ save_engine(engine, path=self.engine_path)
155
+
156
+ def load(self):
157
+ logger.warning(f"Loading TensorRT engine: {self.engine_path}")
158
+ self.engine = engine_from_bytes(bytes_from_path(self.engine_path))
159
+
160
+ def activate(self):
161
+ self.context = self.engine.create_execution_context()
162
+
163
+ def allocate_buffers(self, shape_dict=None, device="cuda"):
164
+ for idx in range(trt_util.get_bindings_per_profile(self.engine)):
165
+ binding = self.engine[idx]
166
+ if shape_dict and binding in shape_dict:
167
+ shape = shape_dict[binding]
168
+ else:
169
+ shape = self.engine.get_binding_shape(binding)
170
+ dtype = trt.nptype(self.engine.get_binding_dtype(binding))
171
+ if self.engine.binding_is_input(binding):
172
+ self.context.set_binding_shape(idx, shape)
173
+ tensor = torch.empty(tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]).to(device=device)
174
+ self.tensors[binding] = tensor
175
+ self.buffers[binding] = cuda.DeviceView(ptr=tensor.data_ptr(), shape=shape, dtype=dtype)
176
+
177
+ def infer(self, feed_dict, stream):
178
+ start_binding, end_binding = trt_util.get_active_profile_bindings(self.context)
179
+ # shallow copy of ordered dict
180
+ device_buffers = copy(self.buffers)
181
+ for name, buf in feed_dict.items():
182
+ assert isinstance(buf, cuda.DeviceView)
183
+ device_buffers[name] = buf
184
+ bindings = [0] * start_binding + [buf.ptr for buf in device_buffers.values()]
185
+ noerror = self.context.execute_async_v2(bindings=bindings, stream_handle=stream.ptr)
186
+ if not noerror:
187
+ raise ValueError("ERROR: inference failed.")
188
+
189
+ return self.tensors
190
+
191
+
192
+ class Optimizer:
193
+ def __init__(self, onnx_graph):
194
+ self.graph = gs.import_onnx(onnx_graph)
195
+
196
+ def cleanup(self, return_onnx=False):
197
+ self.graph.cleanup().toposort()
198
+ if return_onnx:
199
+ return gs.export_onnx(self.graph)
200
+
201
+ def select_outputs(self, keep, names=None):
202
+ self.graph.outputs = [self.graph.outputs[o] for o in keep]
203
+ if names:
204
+ for i, name in enumerate(names):
205
+ self.graph.outputs[i].name = name
206
+
207
+ def fold_constants(self, return_onnx=False):
208
+ onnx_graph = fold_constants(gs.export_onnx(self.graph), allow_onnxruntime_shape_inference=True)
209
+ self.graph = gs.import_onnx(onnx_graph)
210
+ if return_onnx:
211
+ return onnx_graph
212
+
213
+ def infer_shapes(self, return_onnx=False):
214
+ onnx_graph = gs.export_onnx(self.graph)
215
+ if onnx_graph.ByteSize() > 2147483648:
216
+ raise TypeError("ERROR: model size exceeds supported 2GB limit")
217
+ else:
218
+ onnx_graph = shape_inference.infer_shapes(onnx_graph)
219
+
220
+ self.graph = gs.import_onnx(onnx_graph)
221
+ if return_onnx:
222
+ return onnx_graph
223
+
224
+
225
+ class BaseModel:
226
+ def __init__(self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77):
227
+ self.model = model
228
+ self.name = "SD Model"
229
+ self.fp16 = fp16
230
+ self.device = device
231
+
232
+ self.min_batch = 1
233
+ self.max_batch = max_batch_size
234
+ self.min_image_shape = 256 # min image resolution: 256x256
235
+ self.max_image_shape = 1024 # max image resolution: 1024x1024
236
+ self.min_latent_shape = self.min_image_shape // 8
237
+ self.max_latent_shape = self.max_image_shape // 8
238
+
239
+ self.embedding_dim = embedding_dim
240
+ self.text_maxlen = text_maxlen
241
+
242
+ def get_model(self):
243
+ return self.model
244
+
245
+ def get_input_names(self):
246
+ pass
247
+
248
+ def get_output_names(self):
249
+ pass
250
+
251
+ def get_dynamic_axes(self):
252
+ return None
253
+
254
+ def get_sample_input(self, batch_size, image_height, image_width):
255
+ pass
256
+
257
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
258
+ return None
259
+
260
+ def get_shape_dict(self, batch_size, image_height, image_width):
261
+ return None
262
+
263
+ def optimize(self, onnx_graph):
264
+ opt = Optimizer(onnx_graph)
265
+ opt.cleanup()
266
+ opt.fold_constants()
267
+ opt.infer_shapes()
268
+ onnx_opt_graph = opt.cleanup(return_onnx=True)
269
+ return onnx_opt_graph
270
+
271
+ def check_dims(self, batch_size, image_height, image_width):
272
+ assert batch_size >= self.min_batch and batch_size <= self.max_batch
273
+ assert image_height % 8 == 0 and image_width % 8 == 0
274
+ latent_height = image_height // 8
275
+ latent_width = image_width // 8
276
+ assert latent_height >= self.min_latent_shape and latent_height <= self.max_latent_shape
277
+ assert latent_width >= self.min_latent_shape and latent_width <= self.max_latent_shape
278
+ return (latent_height, latent_width)
279
+
280
+ def get_minmax_dims(self, batch_size, image_height, image_width, static_batch, static_shape):
281
+ min_batch = batch_size if static_batch else self.min_batch
282
+ max_batch = batch_size if static_batch else self.max_batch
283
+ latent_height = image_height // 8
284
+ latent_width = image_width // 8
285
+ min_image_height = image_height if static_shape else self.min_image_shape
286
+ max_image_height = image_height if static_shape else self.max_image_shape
287
+ min_image_width = image_width if static_shape else self.min_image_shape
288
+ max_image_width = image_width if static_shape else self.max_image_shape
289
+ min_latent_height = latent_height if static_shape else self.min_latent_shape
290
+ max_latent_height = latent_height if static_shape else self.max_latent_shape
291
+ min_latent_width = latent_width if static_shape else self.min_latent_shape
292
+ max_latent_width = latent_width if static_shape else self.max_latent_shape
293
+ return (
294
+ min_batch,
295
+ max_batch,
296
+ min_image_height,
297
+ max_image_height,
298
+ min_image_width,
299
+ max_image_width,
300
+ min_latent_height,
301
+ max_latent_height,
302
+ min_latent_width,
303
+ max_latent_width,
304
+ )
305
+
306
+
307
+ def getOnnxPath(model_name, onnx_dir, opt=True):
308
+ return os.path.join(onnx_dir, model_name + (".opt" if opt else "") + ".onnx")
309
+
310
+
311
+ def getEnginePath(model_name, engine_dir):
312
+ return os.path.join(engine_dir, model_name + ".plan")
313
+
314
+
315
+ def build_engines(
316
+ models: dict,
317
+ engine_dir,
318
+ onnx_dir,
319
+ onnx_opset,
320
+ opt_image_height,
321
+ opt_image_width,
322
+ opt_batch_size=1,
323
+ force_engine_rebuild=False,
324
+ static_batch=False,
325
+ static_shape=True,
326
+ enable_preview=False,
327
+ enable_all_tactics=False,
328
+ timing_cache=None,
329
+ max_workspace_size=0,
330
+ ):
331
+ built_engines = {}
332
+ if not os.path.isdir(onnx_dir):
333
+ os.makedirs(onnx_dir)
334
+ if not os.path.isdir(engine_dir):
335
+ os.makedirs(engine_dir)
336
+
337
+ # Export models to ONNX
338
+ for model_name, model_obj in models.items():
339
+ engine_path = getEnginePath(model_name, engine_dir)
340
+ if force_engine_rebuild or not os.path.exists(engine_path):
341
+ logger.warning("Building Engines...")
342
+ logger.warning("Engine build can take a while to complete")
343
+ onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
344
+ onnx_opt_path = getOnnxPath(model_name, onnx_dir)
345
+ if force_engine_rebuild or not os.path.exists(onnx_opt_path):
346
+ if force_engine_rebuild or not os.path.exists(onnx_path):
347
+ logger.warning(f"Exporting model: {onnx_path}")
348
+ model = model_obj.get_model()
349
+ with torch.inference_mode(), torch.autocast("cuda"):
350
+ inputs = model_obj.get_sample_input(opt_batch_size, opt_image_height, opt_image_width)
351
+ torch.onnx.export(
352
+ model,
353
+ inputs,
354
+ onnx_path,
355
+ export_params=True,
356
+ opset_version=onnx_opset,
357
+ do_constant_folding=True,
358
+ input_names=model_obj.get_input_names(),
359
+ output_names=model_obj.get_output_names(),
360
+ dynamic_axes=model_obj.get_dynamic_axes(),
361
+ )
362
+ del model
363
+ torch.cuda.empty_cache()
364
+ gc.collect()
365
+ else:
366
+ logger.warning(f"Found cached model: {onnx_path}")
367
+
368
+ # Optimize onnx
369
+ if force_engine_rebuild or not os.path.exists(onnx_opt_path):
370
+ logger.warning(f"Generating optimized model: {onnx_opt_path}")
371
+ onnx_opt_graph = model_obj.optimize(onnx.load(onnx_path))
372
+ onnx.save(onnx_opt_graph, onnx_opt_path)
373
+ else:
374
+ logger.warning(f"Found cached optimized model: {onnx_opt_path} ")
375
+
376
+ # Build TensorRT engines
377
+ for model_name, model_obj in models.items():
378
+ engine_path = getEnginePath(model_name, engine_dir)
379
+ engine = Engine(engine_path)
380
+ onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
381
+ onnx_opt_path = getOnnxPath(model_name, onnx_dir)
382
+
383
+ if force_engine_rebuild or not os.path.exists(engine.engine_path):
384
+ engine.build(
385
+ onnx_opt_path,
386
+ fp16=True,
387
+ input_profile=model_obj.get_input_profile(
388
+ opt_batch_size,
389
+ opt_image_height,
390
+ opt_image_width,
391
+ static_batch=static_batch,
392
+ static_shape=static_shape,
393
+ ),
394
+ enable_preview=enable_preview,
395
+ timing_cache=timing_cache,
396
+ workspace_size=max_workspace_size,
397
+ )
398
+ built_engines[model_name] = engine
399
+
400
+ # Load and activate TensorRT engines
401
+ for model_name, model_obj in models.items():
402
+ engine = built_engines[model_name]
403
+ engine.load()
404
+ engine.activate()
405
+
406
+ return built_engines
407
+
408
+
409
+ def runEngine(engine, feed_dict, stream):
410
+ return engine.infer(feed_dict, stream)
411
+
412
+
413
+ class CLIP(BaseModel):
414
+ def __init__(self, model, device, max_batch_size, embedding_dim):
415
+ super(CLIP, self).__init__(
416
+ model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
417
+ )
418
+ self.name = "CLIP"
419
+
420
+ def get_input_names(self):
421
+ return ["input_ids"]
422
+
423
+ def get_output_names(self):
424
+ return ["text_embeddings", "pooler_output"]
425
+
426
+ def get_dynamic_axes(self):
427
+ return {"input_ids": {0: "B"}, "text_embeddings": {0: "B"}}
428
+
429
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
430
+ self.check_dims(batch_size, image_height, image_width)
431
+ min_batch, max_batch, _, _, _, _, _, _, _, _ = self.get_minmax_dims(
432
+ batch_size, image_height, image_width, static_batch, static_shape
433
+ )
434
+ return {
435
+ "input_ids": [(min_batch, self.text_maxlen), (batch_size, self.text_maxlen), (max_batch, self.text_maxlen)]
436
+ }
437
+
438
+ def get_shape_dict(self, batch_size, image_height, image_width):
439
+ self.check_dims(batch_size, image_height, image_width)
440
+ return {
441
+ "input_ids": (batch_size, self.text_maxlen),
442
+ "text_embeddings": (batch_size, self.text_maxlen, self.embedding_dim),
443
+ }
444
+
445
+ def get_sample_input(self, batch_size, image_height, image_width):
446
+ self.check_dims(batch_size, image_height, image_width)
447
+ return torch.zeros(batch_size, self.text_maxlen, dtype=torch.int32, device=self.device)
448
+
449
+ def optimize(self, onnx_graph):
450
+ opt = Optimizer(onnx_graph)
451
+ opt.select_outputs([0]) # delete graph output#1
452
+ opt.cleanup()
453
+ opt.fold_constants()
454
+ opt.infer_shapes()
455
+ opt.select_outputs([0], names=["text_embeddings"]) # rename network output
456
+ opt_onnx_graph = opt.cleanup(return_onnx=True)
457
+ return opt_onnx_graph
458
+
459
+
460
+ def make_CLIP(model, device, max_batch_size, embedding_dim, inpaint=False):
461
+ return CLIP(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
462
+
463
+
464
+ class UNet(BaseModel):
465
+ def __init__(
466
+ self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77, unet_dim=4
467
+ ):
468
+ super(UNet, self).__init__(
469
+ model=model,
470
+ fp16=fp16,
471
+ device=device,
472
+ max_batch_size=max_batch_size,
473
+ embedding_dim=embedding_dim,
474
+ text_maxlen=text_maxlen,
475
+ )
476
+ self.unet_dim = unet_dim
477
+ self.name = "UNet"
478
+
479
+ def get_input_names(self):
480
+ return ["sample", "timestep", "encoder_hidden_states"]
481
+
482
+ def get_output_names(self):
483
+ return ["latent"]
484
+
485
+ def get_dynamic_axes(self):
486
+ return {
487
+ "sample": {0: "2B", 2: "H", 3: "W"},
488
+ "encoder_hidden_states": {0: "2B"},
489
+ "latent": {0: "2B", 2: "H", 3: "W"},
490
+ }
491
+
492
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
493
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
494
+ (
495
+ min_batch,
496
+ max_batch,
497
+ _,
498
+ _,
499
+ _,
500
+ _,
501
+ min_latent_height,
502
+ max_latent_height,
503
+ min_latent_width,
504
+ max_latent_width,
505
+ ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
506
+ return {
507
+ "sample": [
508
+ (2 * min_batch, self.unet_dim, min_latent_height, min_latent_width),
509
+ (2 * batch_size, self.unet_dim, latent_height, latent_width),
510
+ (2 * max_batch, self.unet_dim, max_latent_height, max_latent_width),
511
+ ],
512
+ "encoder_hidden_states": [
513
+ (2 * min_batch, self.text_maxlen, self.embedding_dim),
514
+ (2 * batch_size, self.text_maxlen, self.embedding_dim),
515
+ (2 * max_batch, self.text_maxlen, self.embedding_dim),
516
+ ],
517
+ }
518
+
519
+ def get_shape_dict(self, batch_size, image_height, image_width):
520
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
521
+ return {
522
+ "sample": (2 * batch_size, self.unet_dim, latent_height, latent_width),
523
+ "encoder_hidden_states": (2 * batch_size, self.text_maxlen, self.embedding_dim),
524
+ "latent": (2 * batch_size, 4, latent_height, latent_width),
525
+ }
526
+
527
+ def get_sample_input(self, batch_size, image_height, image_width):
528
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
529
+ dtype = torch.float16 if self.fp16 else torch.float32
530
+ return (
531
+ torch.randn(
532
+ 2 * batch_size, self.unet_dim, latent_height, latent_width, dtype=torch.float32, device=self.device
533
+ ),
534
+ torch.tensor([1.0], dtype=torch.float32, device=self.device),
535
+ torch.randn(2 * batch_size, self.text_maxlen, self.embedding_dim, dtype=dtype, device=self.device),
536
+ )
537
+
538
+
539
+ def make_UNet(model, device, max_batch_size, embedding_dim, inpaint=False):
540
+ return UNet(
541
+ model,
542
+ fp16=True,
543
+ device=device,
544
+ max_batch_size=max_batch_size,
545
+ embedding_dim=embedding_dim,
546
+ unet_dim=(9 if inpaint else 4),
547
+ )
548
+
549
+
550
+ class VAE(BaseModel):
551
+ def __init__(self, model, device, max_batch_size, embedding_dim):
552
+ super(VAE, self).__init__(
553
+ model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
554
+ )
555
+ self.name = "VAE decoder"
556
+
557
+ def get_input_names(self):
558
+ return ["latent"]
559
+
560
+ def get_output_names(self):
561
+ return ["images"]
562
+
563
+ def get_dynamic_axes(self):
564
+ return {"latent": {0: "B", 2: "H", 3: "W"}, "images": {0: "B", 2: "8H", 3: "8W"}}
565
+
566
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
567
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
568
+ (
569
+ min_batch,
570
+ max_batch,
571
+ _,
572
+ _,
573
+ _,
574
+ _,
575
+ min_latent_height,
576
+ max_latent_height,
577
+ min_latent_width,
578
+ max_latent_width,
579
+ ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
580
+ return {
581
+ "latent": [
582
+ (min_batch, 4, min_latent_height, min_latent_width),
583
+ (batch_size, 4, latent_height, latent_width),
584
+ (max_batch, 4, max_latent_height, max_latent_width),
585
+ ]
586
+ }
587
+
588
+ def get_shape_dict(self, batch_size, image_height, image_width):
589
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
590
+ return {
591
+ "latent": (batch_size, 4, latent_height, latent_width),
592
+ "images": (batch_size, 3, image_height, image_width),
593
+ }
594
+
595
+ def get_sample_input(self, batch_size, image_height, image_width):
596
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
597
+ return torch.randn(batch_size, 4, latent_height, latent_width, dtype=torch.float32, device=self.device)
598
+
599
+
600
+ def make_VAE(model, device, max_batch_size, embedding_dim, inpaint=False):
601
+ return VAE(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
602
+
603
+
604
+ class TorchVAEEncoder(torch.nn.Module):
605
+ def __init__(self, model):
606
+ super().__init__()
607
+ self.vae_encoder = model
608
+
609
+ def forward(self, x):
610
+ return self.vae_encoder.encode(x).latent_dist.sample()
611
+
612
+
613
+ class VAEEncoder(BaseModel):
614
+ def __init__(self, model, device, max_batch_size, embedding_dim):
615
+ super(VAEEncoder, self).__init__(
616
+ model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
617
+ )
618
+ self.name = "VAE encoder"
619
+
620
+ def get_model(self):
621
+ vae_encoder = TorchVAEEncoder(self.model)
622
+ return vae_encoder
623
+
624
+ def get_input_names(self):
625
+ return ["images"]
626
+
627
+ def get_output_names(self):
628
+ return ["latent"]
629
+
630
+ def get_dynamic_axes(self):
631
+ return {"images": {0: "B", 2: "8H", 3: "8W"}, "latent": {0: "B", 2: "H", 3: "W"}}
632
+
633
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
634
+ assert batch_size >= self.min_batch and batch_size <= self.max_batch
635
+ min_batch = batch_size if static_batch else self.min_batch
636
+ max_batch = batch_size if static_batch else self.max_batch
637
+ self.check_dims(batch_size, image_height, image_width)
638
+ (
639
+ min_batch,
640
+ max_batch,
641
+ min_image_height,
642
+ max_image_height,
643
+ min_image_width,
644
+ max_image_width,
645
+ _,
646
+ _,
647
+ _,
648
+ _,
649
+ ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
650
+
651
+ return {
652
+ "images": [
653
+ (min_batch, 3, min_image_height, min_image_width),
654
+ (batch_size, 3, image_height, image_width),
655
+ (max_batch, 3, max_image_height, max_image_width),
656
+ ]
657
+ }
658
+
659
+ def get_shape_dict(self, batch_size, image_height, image_width):
660
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
661
+ return {
662
+ "images": (batch_size, 3, image_height, image_width),
663
+ "latent": (batch_size, 4, latent_height, latent_width),
664
+ }
665
+
666
+ def get_sample_input(self, batch_size, image_height, image_width):
667
+ self.check_dims(batch_size, image_height, image_width)
668
+ return torch.randn(batch_size, 3, image_height, image_width, dtype=torch.float32, device=self.device)
669
+
670
+
671
+ def make_VAEEncoder(model, device, max_batch_size, embedding_dim, inpaint=False):
672
+ return VAEEncoder(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
673
+
674
+
675
+ class TensorRTStableDiffusionImg2ImgPipeline(StableDiffusionImg2ImgPipeline):
676
+ r"""
677
+ Pipeline for image-to-image generation using TensorRT accelerated Stable Diffusion.
678
+
679
+ This model inherits from [`StableDiffusionImg2ImgPipeline`]. Check the superclass documentation for the generic methods the
680
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
681
+
682
+ Args:
683
+ vae ([`AutoencoderKL`]):
684
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
685
+ text_encoder ([`CLIPTextModel`]):
686
+ Frozen text-encoder. Stable Diffusion uses the text portion of
687
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
688
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
689
+ tokenizer (`CLIPTokenizer`):
690
+ Tokenizer of class
691
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
692
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
693
+ scheduler ([`SchedulerMixin`]):
694
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
695
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
696
+ safety_checker ([`StableDiffusionSafetyChecker`]):
697
+ Classification module that estimates whether generated images could be considered offensive or harmful.
698
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
699
+ feature_extractor ([`CLIPFeatureExtractor`]):
700
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
701
+ """
702
+
703
+ def __init__(
704
+ self,
705
+ vae: AutoencoderKL,
706
+ text_encoder: CLIPTextModel,
707
+ tokenizer: CLIPTokenizer,
708
+ unet: UNet2DConditionModel,
709
+ scheduler: DDIMScheduler,
710
+ safety_checker: StableDiffusionSafetyChecker,
711
+ feature_extractor: CLIPFeatureExtractor,
712
+ requires_safety_checker: bool = True,
713
+ stages=["clip", "unet", "vae", "vae_encoder"],
714
+ image_height: int = 512,
715
+ image_width: int = 512,
716
+ max_batch_size: int = 16,
717
+ # ONNX export parameters
718
+ onnx_opset: int = 17,
719
+ onnx_dir: str = "onnx",
720
+ # TensorRT engine build parameters
721
+ engine_dir: str = "engine",
722
+ build_preview_features: bool = True,
723
+ force_engine_rebuild: bool = False,
724
+ timing_cache: str = "timing_cache",
725
+ ):
726
+ super().__init__(
727
+ vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker
728
+ )
729
+
730
+ self.vae.forward = self.vae.decode
731
+
732
+ self.stages = stages
733
+ self.image_height, self.image_width = image_height, image_width
734
+ self.inpaint = False
735
+ self.onnx_opset = onnx_opset
736
+ self.onnx_dir = onnx_dir
737
+ self.engine_dir = engine_dir
738
+ self.force_engine_rebuild = force_engine_rebuild
739
+ self.timing_cache = timing_cache
740
+ self.build_static_batch = False
741
+ self.build_dynamic_shape = False
742
+ self.build_preview_features = build_preview_features
743
+
744
+ self.max_batch_size = max_batch_size
745
+ # TODO: Restrict batch size to 4 for larger image dimensions as a WAR for TensorRT limitation.
746
+ if self.build_dynamic_shape or self.image_height > 512 or self.image_width > 512:
747
+ self.max_batch_size = 4
748
+
749
+ self.stream = None # loaded in loadResources()
750
+ self.models = {} # loaded in __loadModels()
751
+ self.engine = {} # loaded in build_engines()
752
+
753
+ def __loadModels(self):
754
+ # Load pipeline models
755
+ self.embedding_dim = self.text_encoder.config.hidden_size
756
+ models_args = {
757
+ "device": self.torch_device,
758
+ "max_batch_size": self.max_batch_size,
759
+ "embedding_dim": self.embedding_dim,
760
+ "inpaint": self.inpaint,
761
+ }
762
+ if "clip" in self.stages:
763
+ self.models["clip"] = make_CLIP(self.text_encoder, **models_args)
764
+ if "unet" in self.stages:
765
+ self.models["unet"] = make_UNet(self.unet, **models_args)
766
+ if "vae" in self.stages:
767
+ self.models["vae"] = make_VAE(self.vae, **models_args)
768
+ if "vae_encoder" in self.stages:
769
+ self.models["vae_encoder"] = make_VAEEncoder(self.vae, **models_args)
770
+
771
+ @classmethod
772
+ def set_cached_folder(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
773
+ cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
774
+ resume_download = kwargs.pop("resume_download", False)
775
+ proxies = kwargs.pop("proxies", None)
776
+ local_files_only = kwargs.pop("local_files_only", False)
777
+ use_auth_token = kwargs.pop("use_auth_token", None)
778
+ revision = kwargs.pop("revision", None)
779
+
780
+ cls.cached_folder = (
781
+ pretrained_model_name_or_path
782
+ if os.path.isdir(pretrained_model_name_or_path)
783
+ else snapshot_download(
784
+ pretrained_model_name_or_path,
785
+ cache_dir=cache_dir,
786
+ resume_download=resume_download,
787
+ proxies=proxies,
788
+ local_files_only=local_files_only,
789
+ use_auth_token=use_auth_token,
790
+ revision=revision,
791
+ )
792
+ )
793
+
794
+ def to(self, torch_device: Optional[Union[str, torch.device]] = None, silence_dtype_warnings: bool = False):
795
+ super().to(torch_device, silence_dtype_warnings=silence_dtype_warnings)
796
+
797
+ self.onnx_dir = os.path.join(self.cached_folder, self.onnx_dir)
798
+ self.engine_dir = os.path.join(self.cached_folder, self.engine_dir)
799
+ self.timing_cache = os.path.join(self.cached_folder, self.timing_cache)
800
+
801
+ # set device
802
+ self.torch_device = self._execution_device
803
+ logger.warning(f"Running inference on device: {self.torch_device}")
804
+
805
+ # load models
806
+ self.__loadModels()
807
+
808
+ # build engines
809
+ self.engine = build_engines(
810
+ self.models,
811
+ self.engine_dir,
812
+ self.onnx_dir,
813
+ self.onnx_opset,
814
+ opt_image_height=self.image_height,
815
+ opt_image_width=self.image_width,
816
+ force_engine_rebuild=self.force_engine_rebuild,
817
+ static_batch=self.build_static_batch,
818
+ static_shape=not self.build_dynamic_shape,
819
+ enable_preview=self.build_preview_features,
820
+ timing_cache=self.timing_cache,
821
+ )
822
+
823
+ return self
824
+
825
+ def __initialize_timesteps(self, timesteps, strength):
826
+ self.scheduler.set_timesteps(timesteps)
827
+ offset = self.scheduler.steps_offset if hasattr(self.scheduler, "steps_offset") else 0
828
+ init_timestep = int(timesteps * strength) + offset
829
+ init_timestep = min(init_timestep, timesteps)
830
+ t_start = max(timesteps - init_timestep + offset, 0)
831
+ timesteps = self.scheduler.timesteps[t_start:].to(self.torch_device)
832
+ return timesteps, t_start
833
+
834
+ def __preprocess_images(self, batch_size, images=()):
835
+ init_images = []
836
+ for image in images:
837
+ image = image.to(self.torch_device).float()
838
+ image = image.repeat(batch_size, 1, 1, 1)
839
+ init_images.append(image)
840
+ return tuple(init_images)
841
+
842
+ def __encode_image(self, init_image):
843
+ init_latents = runEngine(self.engine["vae_encoder"], {"images": device_view(init_image)}, self.stream)[
844
+ "latent"
845
+ ]
846
+ init_latents = 0.18215 * init_latents
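+ # 0.18215 is the fixed VAE latent scaling factor of Stable Diffusion v1/v2 checkpoints (vae.config.scaling_factor).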
847
+ return init_latents
848
+
849
+ def __encode_prompt(self, prompt, negative_prompt):
850
+ r"""
851
+ Encodes the prompt into text encoder hidden states.
852
+
853
+ Args:
854
+ prompt (`str` or `List[str]`, *optional*):
855
+ prompt to be encoded
856
+ negative_prompt (`str` or `List[str]`, *optional*):
857
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
858
+ `negative_prompt_embeds` instead.
859
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
860
+ """
861
+ # Tokenize prompt
862
+ text_input_ids = (
863
+ self.tokenizer(
864
+ prompt,
865
+ padding="max_length",
866
+ max_length=self.tokenizer.model_max_length,
867
+ truncation=True,
868
+ return_tensors="pt",
869
+ )
870
+ .input_ids.type(torch.int32)
871
+ .to(self.torch_device)
872
+ )
873
+
874
+ text_input_ids_inp = device_view(text_input_ids)
875
+ # NOTE: output tensor for CLIP must be cloned because it will be overwritten when called again for negative prompt
876
+ text_embeddings = runEngine(self.engine["clip"], {"input_ids": text_input_ids_inp}, self.stream)[
877
+ "text_embeddings"
878
+ ].clone()
879
+
880
+ # Tokenize negative prompt
881
+ uncond_input_ids = (
882
+ self.tokenizer(
883
+ negative_prompt,
884
+ padding="max_length",
885
+ max_length=self.tokenizer.model_max_length,
886
+ truncation=True,
887
+ return_tensors="pt",
888
+ )
889
+ .input_ids.type(torch.int32)
890
+ .to(self.torch_device)
891
+ )
892
+ uncond_input_ids_inp = device_view(uncond_input_ids)
893
+ uncond_embeddings = runEngine(self.engine["clip"], {"input_ids": uncond_input_ids_inp}, self.stream)[
894
+ "text_embeddings"
895
+ ]
896
+
897
+ # Concatenate the unconditional and text embeddings into a single batch to avoid doing two forward passes for classifier free guidance
898
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings]).to(dtype=torch.float16)
899
+
900
+ return text_embeddings
901
+
902
+ def __denoise_latent(
903
+ self, latents, text_embeddings, timesteps=None, step_offset=0, mask=None, masked_image_latents=None
904
+ ):
905
+ if not isinstance(timesteps, torch.Tensor):
906
+ timesteps = self.scheduler.timesteps
907
+ for step_index, timestep in enumerate(timesteps):
908
+ # Expand the latents if we are doing classifier free guidance
909
+ latent_model_input = torch.cat([latents] * 2)
910
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, timestep)
911
+ if isinstance(mask, torch.Tensor):
912
+ latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
913
+
914
+ # Predict the noise residual
915
+ timestep_float = timestep.float() if timestep.dtype != torch.float32 else timestep
916
+
917
+ sample_inp = device_view(latent_model_input)
918
+ timestep_inp = device_view(timestep_float)
919
+ embeddings_inp = device_view(text_embeddings)
920
+ noise_pred = runEngine(
921
+ self.engine["unet"],
922
+ {"sample": sample_inp, "timestep": timestep_inp, "encoder_hidden_states": embeddings_inp},
923
+ self.stream,
924
+ )["latent"]
925
+
926
+ # Perform guidance
927
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
928
+ noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond)
929
+
930
+ latents = self.scheduler.step(noise_pred, timestep, latents).prev_sample
931
+
932
+ latents = 1.0 / 0.18215 * latents
933
+ return latents
934
+
935
+ def __decode_latent(self, latents):
936
+ images = runEngine(self.engine["vae"], {"latent": device_view(latents)}, self.stream)["images"]
937
+ images = (images / 2 + 0.5).clamp(0, 1)
938
+ return images.cpu().permute(0, 2, 3, 1).float().numpy()
939
+
940
+ def __loadResources(self, image_height, image_width, batch_size):
941
+ self.stream = cuda.Stream()
942
+
943
+ # Allocate buffers for TensorRT engine bindings
944
+ for model_name, obj in self.models.items():
945
+ self.engine[model_name].allocate_buffers(
946
+ shape_dict=obj.get_shape_dict(batch_size, image_height, image_width), device=self.torch_device
947
+ )
948
+
949
+ @torch.no_grad()
950
+ def __call__(
951
+ self,
952
+ prompt: Union[str, List[str]] = None,
953
+ image: Union[torch.FloatTensor, PIL.Image.Image] = None,
954
+ strength: float = 0.8,
955
+ num_inference_steps: int = 50,
956
+ guidance_scale: float = 7.5,
957
+ negative_prompt: Optional[Union[str, List[str]]] = None,
958
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
959
+ ):
960
+ r"""
961
+ Function invoked when calling the pipeline for generation.
962
+
963
+ Args:
964
+ prompt (`str` or `List[str]`, *optional*):
965
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
966
+ instead.
967
+ image (`PIL.Image.Image`):
968
+ `Image`, or tensor representing an image batch, to be used as the starting point for the
969
+ image-to-image generation; it is noised according to `strength` and then denoised following `prompt`.
970
+ strength (`float`, *optional*, defaults to 0.8):
971
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
972
+ will be used as a starting point, adding more noise to it the larger the `strength`. The number of
973
+ denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
974
+ be maximum and the denoising process will run for the full number of iterations specified in
975
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
976
+ num_inference_steps (`int`, *optional*, defaults to 50):
977
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
978
+ expense of slower inference.
979
+ guidance_scale (`float`, *optional*, defaults to 7.5):
980
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
981
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
982
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
983
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
984
+ usually at the expense of lower image quality.
985
+ negative_prompt (`str` or `List[str]`, *optional*):
986
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
987
+ `negative_prompt_embeds` instead.
988
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
989
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
990
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
991
+ to make generation deterministic.
992
+
993
+ """
994
+ self.generator = generator
995
+ self.denoising_steps = num_inference_steps
996
+ self.guidance_scale = guidance_scale
997
+
998
+ # Pre-compute latent input scales and linear multistep coefficients
999
+ self.scheduler.set_timesteps(self.denoising_steps, device=self.torch_device)
1000
+
1001
+ # Define call parameters
1002
+ if prompt is not None and isinstance(prompt, str):
1003
+ batch_size = 1
1004
+ prompt = [prompt]
1005
+ elif prompt is not None and isinstance(prompt, list):
1006
+ batch_size = len(prompt)
1007
+ else:
1008
+ raise ValueError(f"Expected prompt to be of type list or str but got {type(prompt)}")
1009
+
1010
+ if negative_prompt is None:
1011
+ negative_prompt = [""] * batch_size
1012
+
1013
+ if negative_prompt is not None and isinstance(negative_prompt, str):
1014
+ negative_prompt = [negative_prompt]
1015
+
1016
+ assert len(prompt) == len(negative_prompt)
1017
+
1018
+ if batch_size > self.max_batch_size:
1019
+ raise ValueError(
1020
+ f"Batch size {len(prompt)} is larger than allowed {self.max_batch_size}. If dynamic shape is used, then maximum batch size is 4"
1021
+ )
1022
+
1023
+ # load resources
1024
+ self.__loadResources(self.image_height, self.image_width, batch_size)
1025
+
1026
+ with torch.inference_mode(), torch.autocast("cuda"), trt.Runtime(TRT_LOGGER):
1027
+ # Initialize timesteps
1028
+ timesteps, t_start = self.__initialize_timesteps(self.denoising_steps, strength)
1029
+ latent_timestep = timesteps[:1].repeat(batch_size)
1030
+
1031
+ # Pre-process input image
1032
+ if isinstance(image, PIL.Image.Image):
1033
+ image = preprocess_image(image)
1034
+ init_image = self.__preprocess_images(batch_size, (image,))[0]
1035
+
1036
+ # VAE encode init image
1037
+ init_latents = self.__encode_image(init_image)
1038
+
1039
+ # Add noise to latents using timesteps
1040
+ noise = torch.randn(
1041
+ init_latents.shape, generator=self.generator, device=self.torch_device, dtype=torch.float32
1042
+ )
1043
+ latents = self.scheduler.add_noise(init_latents, noise, latent_timestep)
1044
+
1045
+ # CLIP text encoder
1046
+ text_embeddings = self.__encode_prompt(prompt, negative_prompt)
1047
+
1048
+ # UNet denoiser
1049
+ latents = self.__denoise_latent(latents, text_embeddings, timesteps=timesteps, step_offset=t_start)
1050
+
1051
+ # VAE decode latent
1052
+ images = self.__decode_latent(latents)
1053
+
1054
+ images = self.numpy_to_pil(images)
1055
+ return StableDiffusionPipelineOutput(images=images, nsfw_content_detected=None)
v0.19.2/stable_diffusion_tensorrt_inpaint.py ADDED
@@ -0,0 +1,1088 @@
1
+ #
2
+ # Copyright 2023 The HuggingFace Inc. team.
3
+ # SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+ #
6
+ # Licensed under the Apache License, Version 2.0 (the "License");
7
+ # you may not use this file except in compliance with the License.
8
+ # You may obtain a copy of the License at
9
+ #
10
+ # http://www.apache.org/licenses/LICENSE-2.0
11
+ #
12
+ # Unless required by applicable law or agreed to in writing, software
13
+ # distributed under the License is distributed on an "AS IS" BASIS,
14
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+ # See the License for the specific language governing permissions and
16
+ # limitations under the License.
17
+
18
+ import gc
19
+ import os
20
+ from collections import OrderedDict
21
+ from copy import copy
22
+ from typing import List, Optional, Union
23
+
24
+ import numpy as np
25
+ import onnx
26
+ import onnx_graphsurgeon as gs
27
+ import PIL
28
+ import tensorrt as trt
29
+ import torch
30
+ from huggingface_hub import snapshot_download
31
+ from onnx import shape_inference
32
+ from polygraphy import cuda
33
+ from polygraphy.backend.common import bytes_from_path
34
+ from polygraphy.backend.onnx.loader import fold_constants
35
+ from polygraphy.backend.trt import (
36
+ CreateConfig,
37
+ Profile,
38
+ engine_from_bytes,
39
+ engine_from_network,
40
+ network_from_onnx_path,
41
+ save_engine,
42
+ )
43
+ from polygraphy.backend.trt import util as trt_util
44
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
45
+
46
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
47
+ from diffusers.pipelines.stable_diffusion import (
48
+ StableDiffusionInpaintPipeline,
49
+ StableDiffusionPipelineOutput,
50
+ StableDiffusionSafetyChecker,
51
+ )
52
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint import prepare_mask_and_masked_image
53
+ from diffusers.schedulers import DDIMScheduler
54
+ from diffusers.utils import DIFFUSERS_CACHE, logging
55
+
56
+
57
+ """
58
+ Installation instructions
59
+ python3 -m pip install --upgrade transformers "diffusers>=0.16.0"
60
+ python3 -m pip install --upgrade "tensorrt>=8.6.1"
61
+ python3 -m pip install --upgrade "polygraphy>=0.47.0" onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
62
+ python3 -m pip install onnxruntime
63
+ """
64
+
65
+ TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
66
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
67
+
68
+ # Map of numpy dtype -> torch dtype
69
+ numpy_to_torch_dtype_dict = {
70
+ np.uint8: torch.uint8,
71
+ np.int8: torch.int8,
72
+ np.int16: torch.int16,
73
+ np.int32: torch.int32,
74
+ np.int64: torch.int64,
75
+ np.float16: torch.float16,
76
+ np.float32: torch.float32,
77
+ np.float64: torch.float64,
78
+ np.complex64: torch.complex64,
79
+ np.complex128: torch.complex128,
80
+ }
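+ # NumPy removed the np.bool alias in 1.24, so newer versions must register np.bool_ instead.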
81
+ if np.version.full_version >= "1.24.0":
82
+ numpy_to_torch_dtype_dict[np.bool_] = torch.bool
83
+ else:
84
+ numpy_to_torch_dtype_dict[np.bool] = torch.bool
85
+
86
+ # Map of torch dtype -> numpy dtype
87
+ torch_to_numpy_dtype_dict = {value: key for (key, value) in numpy_to_torch_dtype_dict.items()}
88
+
89
+
90
+ def device_view(t):
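+ # Wrap a CUDA torch tensor's memory as a Polygraphy DeviceView (zero copy) so it can be bound to a TensorRT engine input/output.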
91
+ return cuda.DeviceView(ptr=t.data_ptr(), shape=t.shape, dtype=torch_to_numpy_dtype_dict[t.dtype])
92
+
93
+
94
+ def preprocess_image(image):
95
+ """
96
+ image: PIL.Image.Image; resized to a multiple of 32, scaled to [-1, 1] and returned as a torch.Tensor of shape (1, 3, H, W).
97
+ """
98
+ w, h = image.size
99
+ w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
100
+ image = image.resize((w, h))
101
+ image = np.array(image).astype(np.float32) / 255.0
102
+ image = image[None].transpose(0, 3, 1, 2)
103
+ image = torch.from_numpy(image).contiguous()
104
+ return 2.0 * image - 1.0
105
+
106
+
107
+ class Engine:
108
+ def __init__(self, engine_path):
109
+ self.engine_path = engine_path
110
+ self.engine = None
111
+ self.context = None
112
+ self.buffers = OrderedDict()
113
+ self.tensors = OrderedDict()
114
+
115
+ def __del__(self):
116
+ [buf.free() for buf in self.buffers.values() if isinstance(buf, cuda.DeviceArray)]
117
+ del self.engine
118
+ del self.context
119
+ del self.buffers
120
+ del self.tensors
121
+
122
+ def build(
123
+ self,
124
+ onnx_path,
125
+ fp16,
126
+ input_profile=None,
127
+ enable_preview=False,
128
+ enable_all_tactics=False,
129
+ timing_cache=None,
130
+ workspace_size=0,
131
+ ):
132
+ logger.warning(f"Building TensorRT engine for {onnx_path}: {self.engine_path}")
133
+ p = Profile()
134
+ if input_profile:
135
+ for name, dims in input_profile.items():
136
+ assert len(dims) == 3
137
+ p.add(name, min=dims[0], opt=dims[1], max=dims[2])
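+ # Each profile entry supplies the (min, opt, max) shapes TensorRT uses to build an optimization profile for that dynamic input.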
138
+
139
+ config_kwargs = {}
140
+
141
+ config_kwargs["preview_features"] = [trt.PreviewFeature.DISABLE_EXTERNAL_TACTIC_SOURCES_FOR_CORE_0805]
142
+ if enable_preview:
143
+ # Faster dynamic shapes made optional since it increases engine build time.
144
+ config_kwargs["preview_features"].append(trt.PreviewFeature.FASTER_DYNAMIC_SHAPES_0805)
145
+ if workspace_size > 0:
146
+ config_kwargs["memory_pool_limits"] = {trt.MemoryPoolType.WORKSPACE: workspace_size}
147
+ if not enable_all_tactics:
148
+ config_kwargs["tactic_sources"] = []
149
+
150
+ engine = engine_from_network(
151
+ network_from_onnx_path(onnx_path, flags=[trt.OnnxParserFlag.NATIVE_INSTANCENORM]),
152
+ config=CreateConfig(fp16=fp16, profiles=[p], load_timing_cache=timing_cache, **config_kwargs),
153
+ save_timing_cache=timing_cache,
154
+ )
155
+ save_engine(engine, path=self.engine_path)
156
+
157
+ def load(self):
158
+ logger.warning(f"Loading TensorRT engine: {self.engine_path}")
159
+ self.engine = engine_from_bytes(bytes_from_path(self.engine_path))
160
+
161
+ def activate(self):
162
+ self.context = self.engine.create_execution_context()
163
+
164
+ def allocate_buffers(self, shape_dict=None, device="cuda"):
165
+ for idx in range(trt_util.get_bindings_per_profile(self.engine)):
166
+ binding = self.engine[idx]
167
+ if shape_dict and binding in shape_dict:
168
+ shape = shape_dict[binding]
169
+ else:
170
+ shape = self.engine.get_binding_shape(binding)
171
+ dtype = trt.nptype(self.engine.get_binding_dtype(binding))
172
+ if self.engine.binding_is_input(binding):
173
+ self.context.set_binding_shape(idx, shape)
174
+ tensor = torch.empty(tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]).to(device=device)
175
+ self.tensors[binding] = tensor
176
+ self.buffers[binding] = cuda.DeviceView(ptr=tensor.data_ptr(), shape=shape, dtype=dtype)
177
+
178
+ def infer(self, feed_dict, stream):
179
+ start_binding, end_binding = trt_util.get_active_profile_bindings(self.context)
180
+ # shallow copy of ordered dict
181
+ device_buffers = copy(self.buffers)
182
+ for name, buf in feed_dict.items():
183
+ assert isinstance(buf, cuda.DeviceView)
184
+ device_buffers[name] = buf
185
+ bindings = [0] * start_binding + [buf.ptr for buf in device_buffers.values()]
186
+ noerror = self.context.execute_async_v2(bindings=bindings, stream_handle=stream.ptr)
187
+ if not noerror:
188
+ raise ValueError("ERROR: inference failed.")
189
+
190
+ return self.tensors
191
+
192
+
193
+ class Optimizer:
194
+ def __init__(self, onnx_graph):
195
+ self.graph = gs.import_onnx(onnx_graph)
196
+
197
+ def cleanup(self, return_onnx=False):
198
+ self.graph.cleanup().toposort()
199
+ if return_onnx:
200
+ return gs.export_onnx(self.graph)
201
+
202
+ def select_outputs(self, keep, names=None):
203
+ self.graph.outputs = [self.graph.outputs[o] for o in keep]
204
+ if names:
205
+ for i, name in enumerate(names):
206
+ self.graph.outputs[i].name = name
207
+
208
+ def fold_constants(self, return_onnx=False):
209
+ onnx_graph = fold_constants(gs.export_onnx(self.graph), allow_onnxruntime_shape_inference=True)
210
+ self.graph = gs.import_onnx(onnx_graph)
211
+ if return_onnx:
212
+ return onnx_graph
213
+
214
+ def infer_shapes(self, return_onnx=False):
215
+ onnx_graph = gs.export_onnx(self.graph)
216
+ if onnx_graph.ByteSize() > 2147483648:
217
+ raise TypeError("ERROR: model size exceeds supported 2GB limit")
218
+ else:
219
+ onnx_graph = shape_inference.infer_shapes(onnx_graph)
220
+
221
+ self.graph = gs.import_onnx(onnx_graph)
222
+ if return_onnx:
223
+ return onnx_graph
224
+
225
+
226
+ class BaseModel:
227
+ def __init__(self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77):
228
+ self.model = model
229
+ self.name = "SD Model"
230
+ self.fp16 = fp16
231
+ self.device = device
232
+
233
+ self.min_batch = 1
234
+ self.max_batch = max_batch_size
235
+ self.min_image_shape = 256 # min image resolution: 256x256
236
+ self.max_image_shape = 1024 # max image resolution: 1024x1024
237
+ self.min_latent_shape = self.min_image_shape // 8
238
+ self.max_latent_shape = self.max_image_shape // 8
239
+
240
+ self.embedding_dim = embedding_dim
241
+ self.text_maxlen = text_maxlen
242
+
243
+ def get_model(self):
244
+ return self.model
245
+
246
+ def get_input_names(self):
247
+ pass
248
+
249
+ def get_output_names(self):
250
+ pass
251
+
252
+ def get_dynamic_axes(self):
253
+ return None
254
+
255
+ def get_sample_input(self, batch_size, image_height, image_width):
256
+ pass
257
+
258
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
259
+ return None
260
+
261
+ def get_shape_dict(self, batch_size, image_height, image_width):
262
+ return None
263
+
264
+ def optimize(self, onnx_graph):
265
+ opt = Optimizer(onnx_graph)
266
+ opt.cleanup()
267
+ opt.fold_constants()
268
+ opt.infer_shapes()
269
+ onnx_opt_graph = opt.cleanup(return_onnx=True)
270
+ return onnx_opt_graph
271
+
272
+ def check_dims(self, batch_size, image_height, image_width):
273
+ assert batch_size >= self.min_batch and batch_size <= self.max_batch
274
+ assert image_height % 8 == 0 and image_width % 8 == 0
275
+ latent_height = image_height // 8
276
+ latent_width = image_width // 8
277
+ assert latent_height >= self.min_latent_shape and latent_height <= self.max_latent_shape
278
+ assert latent_width >= self.min_latent_shape and latent_width <= self.max_latent_shape
279
+ return (latent_height, latent_width)
280
+
281
+ def get_minmax_dims(self, batch_size, image_height, image_width, static_batch, static_shape):
282
+ min_batch = batch_size if static_batch else self.min_batch
283
+ max_batch = batch_size if static_batch else self.max_batch
284
+ latent_height = image_height // 8
285
+ latent_width = image_width // 8
286
+ min_image_height = image_height if static_shape else self.min_image_shape
287
+ max_image_height = image_height if static_shape else self.max_image_shape
288
+ min_image_width = image_width if static_shape else self.min_image_shape
289
+ max_image_width = image_width if static_shape else self.max_image_shape
290
+ min_latent_height = latent_height if static_shape else self.min_latent_shape
291
+ max_latent_height = latent_height if static_shape else self.max_latent_shape
292
+ min_latent_width = latent_width if static_shape else self.min_latent_shape
293
+ max_latent_width = latent_width if static_shape else self.max_latent_shape
294
+ return (
295
+ min_batch,
296
+ max_batch,
297
+ min_image_height,
298
+ max_image_height,
299
+ min_image_width,
300
+ max_image_width,
301
+ min_latent_height,
302
+ max_latent_height,
303
+ min_latent_width,
304
+ max_latent_width,
305
+ )
306
+
307
+
308
+ def getOnnxPath(model_name, onnx_dir, opt=True):
309
+ return os.path.join(onnx_dir, model_name + (".opt" if opt else "") + ".onnx")
310
+
311
+
312
+ def getEnginePath(model_name, engine_dir):
313
+ return os.path.join(engine_dir, model_name + ".plan")
314
+
315
+
316
+ def build_engines(
317
+ models: dict,
318
+ engine_dir,
319
+ onnx_dir,
320
+ onnx_opset,
321
+ opt_image_height,
322
+ opt_image_width,
323
+ opt_batch_size=1,
324
+ force_engine_rebuild=False,
325
+ static_batch=False,
326
+ static_shape=True,
327
+ enable_preview=False,
328
+ enable_all_tactics=False,
329
+ timing_cache=None,
330
+ max_workspace_size=0,
331
+ ):
332
+ built_engines = {}
333
+ if not os.path.isdir(onnx_dir):
334
+ os.makedirs(onnx_dir)
335
+ if not os.path.isdir(engine_dir):
336
+ os.makedirs(engine_dir)
337
+
338
+ # Export models to ONNX
339
+ for model_name, model_obj in models.items():
340
+ engine_path = getEnginePath(model_name, engine_dir)
341
+ if force_engine_rebuild or not os.path.exists(engine_path):
342
+ logger.warning("Building Engines...")
343
+ logger.warning("Engine build can take a while to complete")
344
+ onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
345
+ onnx_opt_path = getOnnxPath(model_name, onnx_dir)
346
+ if force_engine_rebuild or not os.path.exists(onnx_opt_path):
347
+ if force_engine_rebuild or not os.path.exists(onnx_path):
348
+ logger.warning(f"Exporting model: {onnx_path}")
349
+ model = model_obj.get_model()
350
+ with torch.inference_mode(), torch.autocast("cuda"):
351
+ inputs = model_obj.get_sample_input(opt_batch_size, opt_image_height, opt_image_width)
352
+ torch.onnx.export(
353
+ model,
354
+ inputs,
355
+ onnx_path,
356
+ export_params=True,
357
+ opset_version=onnx_opset,
358
+ do_constant_folding=True,
359
+ input_names=model_obj.get_input_names(),
360
+ output_names=model_obj.get_output_names(),
361
+ dynamic_axes=model_obj.get_dynamic_axes(),
362
+ )
363
+ del model
364
+ torch.cuda.empty_cache()
365
+ gc.collect()
366
+ else:
367
+ logger.warning(f"Found cached model: {onnx_path}")
368
+
369
+ # Optimize onnx
370
+ if force_engine_rebuild or not os.path.exists(onnx_opt_path):
371
+ logger.warning(f"Generating optimizing model: {onnx_opt_path}")
372
+ onnx_opt_graph = model_obj.optimize(onnx.load(onnx_path))
373
+ onnx.save(onnx_opt_graph, onnx_opt_path)
374
+ else:
375
+ logger.warning(f"Found cached optimized model: {onnx_opt_path} ")
376
+
377
+ # Build TensorRT engines
378
+ for model_name, model_obj in models.items():
379
+ engine_path = getEnginePath(model_name, engine_dir)
380
+ engine = Engine(engine_path)
381
+ onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
382
+ onnx_opt_path = getOnnxPath(model_name, onnx_dir)
383
+
384
+ if force_engine_rebuild or not os.path.exists(engine.engine_path):
385
+ engine.build(
386
+ onnx_opt_path,
387
+ fp16=True,
388
+ input_profile=model_obj.get_input_profile(
389
+ opt_batch_size,
390
+ opt_image_height,
391
+ opt_image_width,
392
+ static_batch=static_batch,
393
+ static_shape=static_shape,
394
+ ),
395
+ enable_preview=enable_preview,
396
+ timing_cache=timing_cache,
397
+ workspace_size=max_workspace_size,
398
+ )
399
+ built_engines[model_name] = engine
400
+
401
+ # Load and activate TensorRT engines
402
+ for model_name, model_obj in models.items():
403
+ engine = built_engines[model_name]
404
+ engine.load()
405
+ engine.activate()
406
+
407
+ return built_engines
408
+
409
+
410
+ def runEngine(engine, feed_dict, stream):
411
+ return engine.infer(feed_dict, stream)
412
+
413
+
414
+ class CLIP(BaseModel):
415
+ def __init__(self, model, device, max_batch_size, embedding_dim):
416
+ super(CLIP, self).__init__(
417
+ model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
418
+ )
419
+ self.name = "CLIP"
420
+
421
+ def get_input_names(self):
422
+ return ["input_ids"]
423
+
424
+ def get_output_names(self):
425
+ return ["text_embeddings", "pooler_output"]
426
+
427
+ def get_dynamic_axes(self):
428
+ return {"input_ids": {0: "B"}, "text_embeddings": {0: "B"}}
429
+
430
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
431
+ self.check_dims(batch_size, image_height, image_width)
432
+ min_batch, max_batch, _, _, _, _, _, _, _, _ = self.get_minmax_dims(
433
+ batch_size, image_height, image_width, static_batch, static_shape
434
+ )
435
+ return {
436
+ "input_ids": [(min_batch, self.text_maxlen), (batch_size, self.text_maxlen), (max_batch, self.text_maxlen)]
437
+ }
438
+
439
+ def get_shape_dict(self, batch_size, image_height, image_width):
440
+ self.check_dims(batch_size, image_height, image_width)
441
+ return {
442
+ "input_ids": (batch_size, self.text_maxlen),
443
+ "text_embeddings": (batch_size, self.text_maxlen, self.embedding_dim),
444
+ }
445
+
446
+ def get_sample_input(self, batch_size, image_height, image_width):
447
+ self.check_dims(batch_size, image_height, image_width)
448
+ return torch.zeros(batch_size, self.text_maxlen, dtype=torch.int32, device=self.device)
449
+
450
+ def optimize(self, onnx_graph):
451
+ opt = Optimizer(onnx_graph)
452
+ opt.select_outputs([0]) # delete graph output#1
453
+ opt.cleanup()
454
+ opt.fold_constants()
455
+ opt.infer_shapes()
456
+ opt.select_outputs([0], names=["text_embeddings"]) # rename network output
457
+ opt_onnx_graph = opt.cleanup(return_onnx=True)
458
+ return opt_onnx_graph
459
+
460
+
461
+ def make_CLIP(model, device, max_batch_size, embedding_dim, inpaint=False):
462
+ return CLIP(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
463
+
464
+
465
+ class UNet(BaseModel):
466
+ def __init__(
467
+ self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77, unet_dim=4
468
+ ):
469
+ super(UNet, self).__init__(
470
+ model=model,
471
+ fp16=fp16,
472
+ device=device,
473
+ max_batch_size=max_batch_size,
474
+ embedding_dim=embedding_dim,
475
+ text_maxlen=text_maxlen,
476
+ )
477
+ self.unet_dim = unet_dim
478
+ self.name = "UNet"
479
+
480
+ def get_input_names(self):
481
+ return ["sample", "timestep", "encoder_hidden_states"]
482
+
483
+ def get_output_names(self):
484
+ return ["latent"]
485
+
486
+ def get_dynamic_axes(self):
487
+ return {
488
+ "sample": {0: "2B", 2: "H", 3: "W"},
489
+ "encoder_hidden_states": {0: "2B"},
490
+ "latent": {0: "2B", 2: "H", 3: "W"},
491
+ }
492
+
493
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
494
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
495
+ (
496
+ min_batch,
497
+ max_batch,
498
+ _,
499
+ _,
500
+ _,
501
+ _,
502
+ min_latent_height,
503
+ max_latent_height,
504
+ min_latent_width,
505
+ max_latent_width,
506
+ ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
507
+ return {
508
+ "sample": [
509
+ (2 * min_batch, self.unet_dim, min_latent_height, min_latent_width),
510
+ (2 * batch_size, self.unet_dim, latent_height, latent_width),
511
+ (2 * max_batch, self.unet_dim, max_latent_height, max_latent_width),
512
+ ],
513
+ "encoder_hidden_states": [
514
+ (2 * min_batch, self.text_maxlen, self.embedding_dim),
515
+ (2 * batch_size, self.text_maxlen, self.embedding_dim),
516
+ (2 * max_batch, self.text_maxlen, self.embedding_dim),
517
+ ],
518
+ }
519
+
520
+ def get_shape_dict(self, batch_size, image_height, image_width):
521
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
522
+ return {
523
+ "sample": (2 * batch_size, self.unet_dim, latent_height, latent_width),
524
+ "encoder_hidden_states": (2 * batch_size, self.text_maxlen, self.embedding_dim),
525
+ "latent": (2 * batch_size, 4, latent_height, latent_width),
526
+ }
527
+
528
+ def get_sample_input(self, batch_size, image_height, image_width):
529
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
530
+ dtype = torch.float16 if self.fp16 else torch.float32
531
+ return (
532
+ torch.randn(
533
+ 2 * batch_size, self.unet_dim, latent_height, latent_width, dtype=torch.float32, device=self.device
534
+ ),
535
+ torch.tensor([1.0], dtype=torch.float32, device=self.device),
536
+ torch.randn(2 * batch_size, self.text_maxlen, self.embedding_dim, dtype=dtype, device=self.device),
537
+ )
538
+
539
+
540
+ def make_UNet(model, device, max_batch_size, embedding_dim, inpaint=False, unet_dim=4):
541
+ return UNet(
542
+ model,
543
+ fp16=True,
544
+ device=device,
545
+ max_batch_size=max_batch_size,
546
+ embedding_dim=embedding_dim,
547
+ unet_dim=unet_dim,
548
+ )
549
+
550
+
551
+ class VAE(BaseModel):
552
+ def __init__(self, model, device, max_batch_size, embedding_dim):
553
+ super(VAE, self).__init__(
554
+ model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
555
+ )
556
+ self.name = "VAE decoder"
557
+
558
+ def get_input_names(self):
559
+ return ["latent"]
560
+
561
+ def get_output_names(self):
562
+ return ["images"]
563
+
564
+ def get_dynamic_axes(self):
565
+ return {"latent": {0: "B", 2: "H", 3: "W"}, "images": {0: "B", 2: "8H", 3: "8W"}}
566
+
567
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
568
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
569
+ (
570
+ min_batch,
571
+ max_batch,
572
+ _,
573
+ _,
574
+ _,
575
+ _,
576
+ min_latent_height,
577
+ max_latent_height,
578
+ min_latent_width,
579
+ max_latent_width,
580
+ ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
581
+ return {
582
+ "latent": [
583
+ (min_batch, 4, min_latent_height, min_latent_width),
584
+ (batch_size, 4, latent_height, latent_width),
585
+ (max_batch, 4, max_latent_height, max_latent_width),
586
+ ]
587
+ }
588
+
589
+ def get_shape_dict(self, batch_size, image_height, image_width):
590
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
591
+ return {
592
+ "latent": (batch_size, 4, latent_height, latent_width),
593
+ "images": (batch_size, 3, image_height, image_width),
594
+ }
595
+
596
+ def get_sample_input(self, batch_size, image_height, image_width):
597
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
598
+ return torch.randn(batch_size, 4, latent_height, latent_width, dtype=torch.float32, device=self.device)
599
+
600
+
601
+ def make_VAE(model, device, max_batch_size, embedding_dim, inpaint=False):
602
+ return VAE(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
603
+
604
+
605
+ class TorchVAEEncoder(torch.nn.Module):
606
+ def __init__(self, model):
607
+ super().__init__()
608
+ self.vae_encoder = model
609
+
610
+ def forward(self, x):
611
+ return self.vae_encoder.encode(x).latent_dist.sample()
612
+
613
+
614
+ class VAEEncoder(BaseModel):
615
+ def __init__(self, model, device, max_batch_size, embedding_dim):
616
+ super(VAEEncoder, self).__init__(
617
+ model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
618
+ )
619
+ self.name = "VAE encoder"
620
+
621
+ def get_model(self):
622
+ vae_encoder = TorchVAEEncoder(self.model)
623
+ return vae_encoder
624
+
625
+ def get_input_names(self):
626
+ return ["images"]
627
+
628
+ def get_output_names(self):
629
+ return ["latent"]
630
+
631
+ def get_dynamic_axes(self):
632
+ return {"images": {0: "B", 2: "8H", 3: "8W"}, "latent": {0: "B", 2: "H", 3: "W"}}
633
+
634
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
635
+ assert batch_size >= self.min_batch and batch_size <= self.max_batch
636
+ min_batch = batch_size if static_batch else self.min_batch
637
+ max_batch = batch_size if static_batch else self.max_batch
638
+ self.check_dims(batch_size, image_height, image_width)
639
+ (
640
+ min_batch,
641
+ max_batch,
642
+ min_image_height,
643
+ max_image_height,
644
+ min_image_width,
645
+ max_image_width,
646
+ _,
647
+ _,
648
+ _,
649
+ _,
650
+ ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
651
+
652
+ return {
653
+ "images": [
654
+ (min_batch, 3, min_image_height, min_image_width),
655
+ (batch_size, 3, image_height, image_width),
656
+ (max_batch, 3, max_image_height, max_image_width),
657
+ ]
658
+ }
659
+
660
+ def get_shape_dict(self, batch_size, image_height, image_width):
661
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
662
+ return {
663
+ "images": (batch_size, 3, image_height, image_width),
664
+ "latent": (batch_size, 4, latent_height, latent_width),
665
+ }
666
+
667
+ def get_sample_input(self, batch_size, image_height, image_width):
668
+ self.check_dims(batch_size, image_height, image_width)
669
+ return torch.randn(batch_size, 3, image_height, image_width, dtype=torch.float32, device=self.device)
670
+
671
+
672
+ def make_VAEEncoder(model, device, max_batch_size, embedding_dim, inpaint=False):
673
+ return VAEEncoder(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
674
+
675
+
676
+ class TensorRTStableDiffusionInpaintPipeline(StableDiffusionInpaintPipeline):
677
+ r"""
678
+ Pipeline for inpainting using TensorRT accelerated Stable Diffusion.
679
+
680
+ This model inherits from [`StableDiffusionInpaintPipeline`]. Check the superclass documentation for the generic methods the
681
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
682
+
683
+ Args:
684
+ vae ([`AutoencoderKL`]):
685
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
686
+ text_encoder ([`CLIPTextModel`]):
687
+ Frozen text-encoder. Stable Diffusion uses the text portion of
688
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
689
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
690
+ tokenizer (`CLIPTokenizer`):
691
+ Tokenizer of class
692
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
693
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
694
+ scheduler ([`SchedulerMixin`]):
695
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
696
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
697
+ safety_checker ([`StableDiffusionSafetyChecker`]):
698
+ Classification module that estimates whether generated images could be considered offensive or harmful.
699
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
700
+ feature_extractor ([`CLIPFeatureExtractor`]):
701
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
702
+ """
703
+
704
+ def __init__(
705
+ self,
706
+ vae: AutoencoderKL,
707
+ text_encoder: CLIPTextModel,
708
+ tokenizer: CLIPTokenizer,
709
+ unet: UNet2DConditionModel,
710
+ scheduler: DDIMScheduler,
711
+ safety_checker: StableDiffusionSafetyChecker,
712
+ feature_extractor: CLIPFeatureExtractor,
713
+ requires_safety_checker: bool = True,
714
+ stages=["clip", "unet", "vae", "vae_encoder"],
715
+ image_height: int = 512,
716
+ image_width: int = 512,
717
+ max_batch_size: int = 16,
718
+ # ONNX export parameters
719
+ onnx_opset: int = 17,
720
+ onnx_dir: str = "onnx",
721
+ # TensorRT engine build parameters
722
+ engine_dir: str = "engine",
723
+ build_preview_features: bool = True,
724
+ force_engine_rebuild: bool = False,
725
+ timing_cache: str = "timing_cache",
726
+ ):
727
+ super().__init__(
728
+ vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker
729
+ )
730
+
731
+ self.vae.forward = self.vae.decode
732
+
733
+ self.stages = stages
734
+ self.image_height, self.image_width = image_height, image_width
735
+ self.inpaint = True
736
+ self.onnx_opset = onnx_opset
737
+ self.onnx_dir = onnx_dir
738
+ self.engine_dir = engine_dir
739
+ self.force_engine_rebuild = force_engine_rebuild
740
+ self.timing_cache = timing_cache
741
+ self.build_static_batch = False
742
+ self.build_dynamic_shape = False
743
+ self.build_preview_features = build_preview_features
744
+
745
+ self.max_batch_size = max_batch_size
746
+ # TODO: Restrict batch size to 4 for larger image dimensions as a WAR for TensorRT limitation.
747
+ if self.build_dynamic_shape or self.image_height > 512 or self.image_width > 512:
748
+ self.max_batch_size = 4
749
+
750
+ self.stream = None # loaded in loadResources()
751
+ self.models = {} # loaded in __loadModels()
752
+ self.engine = {} # loaded in build_engines()
753
+
754
+ def __loadModels(self):
755
+ # Load pipeline models
756
+ self.embedding_dim = self.text_encoder.config.hidden_size
757
+ models_args = {
758
+ "device": self.torch_device,
759
+ "max_batch_size": self.max_batch_size,
760
+ "embedding_dim": self.embedding_dim,
761
+ "inpaint": self.inpaint,
762
+ }
763
+ if "clip" in self.stages:
764
+ self.models["clip"] = make_CLIP(self.text_encoder, **models_args)
765
+ if "unet" in self.stages:
766
+ self.models["unet"] = make_UNet(self.unet, **models_args, unet_dim=self.unet.config.in_channels)
767
+ if "vae" in self.stages:
768
+ self.models["vae"] = make_VAE(self.vae, **models_args)
769
+ if "vae_encoder" in self.stages:
770
+ self.models["vae_encoder"] = make_VAEEncoder(self.vae, **models_args)
771
+
772
+ @classmethod
773
+ def set_cached_folder(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
774
+ cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
775
+ resume_download = kwargs.pop("resume_download", False)
776
+ proxies = kwargs.pop("proxies", None)
777
+ local_files_only = kwargs.pop("local_files_only", False)
778
+ use_auth_token = kwargs.pop("use_auth_token", None)
779
+ revision = kwargs.pop("revision", None)
780
+
781
+ cls.cached_folder = (
782
+ pretrained_model_name_or_path
783
+ if os.path.isdir(pretrained_model_name_or_path)
784
+ else snapshot_download(
785
+ pretrained_model_name_or_path,
786
+ cache_dir=cache_dir,
787
+ resume_download=resume_download,
788
+ proxies=proxies,
789
+ local_files_only=local_files_only,
790
+ use_auth_token=use_auth_token,
791
+ revision=revision,
792
+ )
793
+ )
794
+
795
+ def to(self, torch_device: Optional[Union[str, torch.device]] = None, silence_dtype_warnings: bool = False):
796
+ super().to(torch_device, silence_dtype_warnings=silence_dtype_warnings)
797
+
798
+ self.onnx_dir = os.path.join(self.cached_folder, self.onnx_dir)
799
+ self.engine_dir = os.path.join(self.cached_folder, self.engine_dir)
800
+ self.timing_cache = os.path.join(self.cached_folder, self.timing_cache)
801
+
802
+ # set device
803
+ self.torch_device = self._execution_device
804
+ logger.warning(f"Running inference on device: {self.torch_device}")
805
+
806
+ # load models
807
+ self.__loadModels()
808
+
809
+ # build engines
810
+ self.engine = build_engines(
811
+ self.models,
812
+ self.engine_dir,
813
+ self.onnx_dir,
814
+ self.onnx_opset,
815
+ opt_image_height=self.image_height,
816
+ opt_image_width=self.image_width,
817
+ force_engine_rebuild=self.force_engine_rebuild,
818
+ static_batch=self.build_static_batch,
819
+ static_shape=not self.build_dynamic_shape,
820
+ enable_preview=self.build_preview_features,
821
+ timing_cache=self.timing_cache,
822
+ )
823
+
824
+ return self
825
+
826
+ def __initialize_timesteps(self, timesteps, strength):
827
+ self.scheduler.set_timesteps(timesteps)
828
+ offset = self.scheduler.steps_offset if hasattr(self.scheduler, "steps_offset") else 0
829
+ init_timestep = int(timesteps * strength) + offset
830
+ init_timestep = min(init_timestep, timesteps)
831
+ t_start = max(timesteps - init_timestep + offset, 0)
832
+ timesteps = self.scheduler.timesteps[t_start:].to(self.torch_device)
833
+ return timesteps, t_start
834
+
835
+ def __preprocess_images(self, batch_size, images=()):
836
+ init_images = []
837
+ for image in images:
838
+ image = image.to(self.torch_device).float()
839
+ image = image.repeat(batch_size, 1, 1, 1)
840
+ init_images.append(image)
841
+ return tuple(init_images)
842
+
843
+ def __encode_image(self, init_image):
844
+ init_latents = runEngine(self.engine["vae_encoder"], {"images": device_view(init_image)}, self.stream)[
845
+ "latent"
846
+ ]
847
+ init_latents = 0.18215 * init_latents
848
+ return init_latents
849
+
850
+ def __encode_prompt(self, prompt, negative_prompt):
851
+ r"""
852
+ Encodes the prompt into text encoder hidden states.
853
+
854
+ Args:
855
+ prompt (`str` or `List[str]`, *optional*):
856
+ prompt to be encoded
857
+ negative_prompt (`str` or `List[str]`, *optional*):
858
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
859
+ `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead.
860
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
861
+ """
862
+ # Tokenize prompt
863
+ text_input_ids = (
864
+ self.tokenizer(
865
+ prompt,
866
+ padding="max_length",
867
+ max_length=self.tokenizer.model_max_length,
868
+ truncation=True,
869
+ return_tensors="pt",
870
+ )
871
+ .input_ids.type(torch.int32)
872
+ .to(self.torch_device)
873
+ )
874
+
875
+ text_input_ids_inp = device_view(text_input_ids)
876
+ # NOTE: output tensor for CLIP must be cloned because it will be overwritten when called again for negative prompt
877
+ text_embeddings = runEngine(self.engine["clip"], {"input_ids": text_input_ids_inp}, self.stream)[
878
+ "text_embeddings"
879
+ ].clone()
880
+
881
+ # Tokenize negative prompt
882
+ uncond_input_ids = (
883
+ self.tokenizer(
884
+ negative_prompt,
885
+ padding="max_length",
886
+ max_length=self.tokenizer.model_max_length,
887
+ truncation=True,
888
+ return_tensors="pt",
889
+ )
890
+ .input_ids.type(torch.int32)
891
+ .to(self.torch_device)
892
+ )
893
+ uncond_input_ids_inp = device_view(uncond_input_ids)
894
+ uncond_embeddings = runEngine(self.engine["clip"], {"input_ids": uncond_input_ids_inp}, self.stream)[
895
+ "text_embeddings"
896
+ ]
897
+
898
+ # Concatenate the unconditional and text embeddings into a single batch to avoid doing two forward passes for classifier free guidance
899
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings]).to(dtype=torch.float16)
900
+
901
+ return text_embeddings
902
+
903
+ def __denoise_latent(
904
+ self, latents, text_embeddings, timesteps=None, step_offset=0, mask=None, masked_image_latents=None
905
+ ):
906
+ if not isinstance(timesteps, torch.Tensor):
907
+ timesteps = self.scheduler.timesteps
908
+ for step_index, timestep in enumerate(timesteps):
909
+ # Expand the latents if we are doing classifier free guidance
910
+ latent_model_input = torch.cat([latents] * 2)
911
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, timestep)
912
+ if isinstance(mask, torch.Tensor):
913
+ latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
914
+
915
+ # Predict the noise residual
916
+ timestep_float = timestep.float() if timestep.dtype != torch.float32 else timestep
917
+
918
+ sample_inp = device_view(latent_model_input)
919
+ timestep_inp = device_view(timestep_float)
920
+ embeddings_inp = device_view(text_embeddings)
921
+ noise_pred = runEngine(
922
+ self.engine["unet"],
923
+ {"sample": sample_inp, "timestep": timestep_inp, "encoder_hidden_states": embeddings_inp},
924
+ self.stream,
925
+ )["latent"]
926
+
927
+ # Perform guidance
928
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
929
+ noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond)
930
+
931
+ latents = self.scheduler.step(noise_pred, timestep, latents).prev_sample
932
+
933
+ latents = 1.0 / 0.18215 * latents
934
+ return latents
935
+
936
+ def __decode_latent(self, latents):
937
+ images = runEngine(self.engine["vae"], {"latent": device_view(latents)}, self.stream)["images"]
938
+ images = (images / 2 + 0.5).clamp(0, 1)
939
+ return images.cpu().permute(0, 2, 3, 1).float().numpy()
940
+
941
+ def __loadResources(self, image_height, image_width, batch_size):
942
+ self.stream = cuda.Stream()
943
+
944
+ # Allocate buffers for TensorRT engine bindings
945
+ for model_name, obj in self.models.items():
946
+ self.engine[model_name].allocate_buffers(
947
+ shape_dict=obj.get_shape_dict(batch_size, image_height, image_width), device=self.torch_device
948
+ )
949
+
950
+ @torch.no_grad()
951
+ def __call__(
952
+ self,
953
+ prompt: Union[str, List[str]] = None,
954
+ image: Union[torch.FloatTensor, PIL.Image.Image] = None,
955
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
956
+ strength: float = 0.75,
957
+ num_inference_steps: int = 50,
958
+ guidance_scale: float = 7.5,
959
+ negative_prompt: Optional[Union[str, List[str]]] = None,
960
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
961
+ ):
962
+ r"""
963
+ Function invoked when calling the pipeline for generation.
964
+
965
+ Args:
966
+ prompt (`str` or `List[str]`, *optional*):
967
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
968
+ instead.
969
+ image (`PIL.Image.Image`):
970
+ `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
971
+ be masked out with `mask_image` and repainted according to `prompt`.
972
+ mask_image (`PIL.Image.Image`):
973
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
974
+ repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
975
+ to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
976
+ instead of 3, so the expected shape would be `(B, H, W, 1)`.
977
+ strength (`float`, *optional*, defaults to 0.75):
978
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
979
+ will be used as a starting point, adding more noise to it the larger the `strength`. The number of
980
+ denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
981
+ be maximum and the denoising process will run for the full number of iterations specified in
982
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
983
+ num_inference_steps (`int`, *optional*, defaults to 50):
984
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
985
+ expense of slower inference.
986
+ guidance_scale (`float`, *optional*, defaults to 7.5):
987
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
988
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
989
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
990
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
991
+ usually at the expense of lower image quality.
992
+ negative_prompt (`str` or `List[str]`, *optional*):
993
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
994
+ `negative_prompt_embeds` instead.
995
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
996
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
997
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
998
+ to make generation deterministic.
999
+
1000
+ """
1001
+ self.generator = generator
1002
+ self.denoising_steps = num_inference_steps
1003
+ self.guidance_scale = guidance_scale
1004
+
1005
+ # Pre-compute latent input scales and linear multistep coefficients
1006
+ self.scheduler.set_timesteps(self.denoising_steps, device=self.torch_device)
1007
+
1008
+ # Define call parameters
1009
+ if prompt is not None and isinstance(prompt, str):
1010
+ batch_size = 1
1011
+ prompt = [prompt]
1012
+ elif prompt is not None and isinstance(prompt, list):
1013
+ batch_size = len(prompt)
1014
+ else:
1015
+ raise ValueError(f"Expected prompt to be of type list or str but got {type(prompt)}")
1016
+
1017
+ if negative_prompt is None:
1018
+ negative_prompt = [""] * batch_size
1019
+
1020
+ if negative_prompt is not None and isinstance(negative_prompt, str):
1021
+ negative_prompt = [negative_prompt]
1022
+
1023
+ assert len(prompt) == len(negative_prompt)
1024
+
1025
+ if batch_size > self.max_batch_size:
1026
+ raise ValueError(
1027
+ f"Batch size {len(prompt)} is larger than allowed {self.max_batch_size}. If dynamic shape is used, then maximum batch size is 4"
1028
+ )
1029
+
1030
+ # Validate image dimensions
1031
+ mask_width, mask_height = mask_image.size
1032
+ if mask_height != self.image_height or mask_width != self.image_width:
1033
+ raise ValueError(
1034
+ f"Input image height and width {self.image_height} and {self.image_width} are not equal to "
1035
+ f"the respective dimensions of the mask image {mask_height} and {mask_width}"
1036
+ )
1037
+
1038
+ # load resources
1039
+ self.__loadResources(self.image_height, self.image_width, batch_size)
1040
+
1041
+ with torch.inference_mode(), torch.autocast("cuda"), trt.Runtime(TRT_LOGGER):
1042
+ # Spatial dimensions of latent tensor
1043
+ latent_height = self.image_height // 8
1044
+ latent_width = self.image_width // 8
1045
+
1046
+ # Pre-initialize latents
1047
+ num_channels_latents = self.vae.config.latent_channels
1048
+ latents = self.prepare_latents(
1049
+ batch_size,
1050
+ num_channels_latents,
1051
+ self.image_height,
1052
+ self.image_width,
1053
+ torch.float32,
1054
+ self.torch_device,
1055
+ generator,
1056
+ )
1057
+
1058
+ # Pre-process input images
1059
+ mask, masked_image = self.__preprocess_images(batch_size, prepare_mask_and_masked_image(image, mask_image))
1060
+ # print(mask)
1061
+ mask = torch.nn.functional.interpolate(mask, size=(latent_height, latent_width))
1062
+ mask = torch.cat([mask] * 2)
1063
+
1064
+ # Initialize timesteps
1065
+ timesteps, t_start = self.__initialize_timesteps(self.denoising_steps, strength)
1066
+
1067
+ # VAE encode masked image
1068
+ masked_latents = self.__encode_image(masked_image)
1069
+ masked_latents = torch.cat([masked_latents] * 2)
1070
+
1071
+ # CLIP text encoder
1072
+ text_embeddings = self.__encode_prompt(prompt, negative_prompt)
1073
+
1074
+ # UNet denoiser
1075
+ latents = self.__denoise_latent(
1076
+ latents,
1077
+ text_embeddings,
1078
+ timesteps=timesteps,
1079
+ step_offset=t_start,
1080
+ mask=mask,
1081
+ masked_image_latents=masked_latents,
1082
+ )
1083
+
1084
+ # VAE decode latent
1085
+ images = self.__decode_latent(latents)
1086
+
1087
+ images = self.numpy_to_pil(images)
1088
+ return StableDiffusionPipelineOutput(images=images, nsfw_content_detected=None)
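Example usage for the TensorRT inpainting pipeline above (a minimal sketch: the checkpoint id, image URLs, and prompt are illustrative assumptions, not part of this file; the pipeline is loaded by its community file name via `custom_pipeline`):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Assumed 9-channel SD inpainting checkpoint; any compatible inpainting checkpoint should work the same way.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    custom_pipeline="stable_diffusion_tensorrt_inpaint",
    torch_dtype=torch.float16,
)

# Folder where the ONNX exports and TensorRT engines are cached on first use.
pipe.set_cached_folder("stabilityai/stable-diffusion-2-inpainting")

# Moving to CUDA triggers the ONNX export and engine build; this can take several minutes.
pipe = pipe.to("cuda")

# Hypothetical 512x512 inputs; their size must match the pipeline's image_height/image_width.
init_image = load_image("https://example.com/input.png")
mask_image = load_image("https://example.com/mask.png")

image = pipe(
    "a mecha robot sitting on a bench",
    image=init_image,
    mask_image=mask_image,
    strength=0.75,
).images[0]
image.save("inpainted.png")
```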
v0.19.2/stable_diffusion_tensorrt_txt2img.py ADDED
@@ -0,0 +1,928 @@
1
+ #
2
+ # Copyright 2023 The HuggingFace Inc. team.
3
+ # SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+ #
6
+ # Licensed under the Apache License, Version 2.0 (the "License");
7
+ # you may not use this file except in compliance with the License.
8
+ # You may obtain a copy of the License at
9
+ #
10
+ # http://www.apache.org/licenses/LICENSE-2.0
11
+ #
12
+ # Unless required by applicable law or agreed to in writing, software
13
+ # distributed under the License is distributed on an "AS IS" BASIS,
14
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+ # See the License for the specific language governing permissions and
16
+ # limitations under the License.
17
+
18
+ import gc
19
+ import os
20
+ from collections import OrderedDict
21
+ from copy import copy
22
+ from typing import List, Optional, Union
23
+
24
+ import numpy as np
25
+ import onnx
26
+ import onnx_graphsurgeon as gs
27
+ import tensorrt as trt
28
+ import torch
29
+ from huggingface_hub import snapshot_download
30
+ from onnx import shape_inference
31
+ from polygraphy import cuda
32
+ from polygraphy.backend.common import bytes_from_path
33
+ from polygraphy.backend.onnx.loader import fold_constants
34
+ from polygraphy.backend.trt import (
35
+ CreateConfig,
36
+ Profile,
37
+ engine_from_bytes,
38
+ engine_from_network,
39
+ network_from_onnx_path,
40
+ save_engine,
41
+ )
42
+ from polygraphy.backend.trt import util as trt_util
43
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
44
+
45
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
46
+ from diffusers.pipelines.stable_diffusion import (
47
+ StableDiffusionPipeline,
48
+ StableDiffusionPipelineOutput,
49
+ StableDiffusionSafetyChecker,
50
+ )
51
+ from diffusers.schedulers import DDIMScheduler
52
+ from diffusers.utils import DIFFUSERS_CACHE, logging
53
+
54
+
55
+ """
56
+ Installation instructions
57
+ python3 -m pip install --upgrade transformers diffusers>=0.16.0
58
+ python3 -m pip install --upgrade tensorrt>=8.6.1
59
+ python3 -m pip install --upgrade polygraphy>=0.47.0 onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
60
+ python3 -m pip install onnxruntime
61
+ """
62
+
63
+ TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
64
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
65
+
66
+ # Map of numpy dtype -> torch dtype
67
+ numpy_to_torch_dtype_dict = {
68
+ np.uint8: torch.uint8,
69
+ np.int8: torch.int8,
70
+ np.int16: torch.int16,
71
+ np.int32: torch.int32,
72
+ np.int64: torch.int64,
73
+ np.float16: torch.float16,
74
+ np.float32: torch.float32,
75
+ np.float64: torch.float64,
76
+ np.complex64: torch.complex64,
77
+ np.complex128: torch.complex128,
78
+ }
79
+ if np.version.full_version >= "1.24.0":
80
+ numpy_to_torch_dtype_dict[np.bool_] = torch.bool
81
+ else:
82
+ numpy_to_torch_dtype_dict[np.bool] = torch.bool
83
+
84
+ # Map of torch dtype -> numpy dtype
85
+ torch_to_numpy_dtype_dict = {value: key for (key, value) in numpy_to_torch_dtype_dict.items()}
86
+
87
+
88
+ def device_view(t):
89
+ return cuda.DeviceView(ptr=t.data_ptr(), shape=t.shape, dtype=torch_to_numpy_dtype_dict[t.dtype])
90
+
91
+
92
+ class Engine:
93
+ def __init__(self, engine_path):
94
+ self.engine_path = engine_path
95
+ self.engine = None
96
+ self.context = None
97
+ self.buffers = OrderedDict()
98
+ self.tensors = OrderedDict()
99
+
100
+ def __del__(self):
101
+ [buf.free() for buf in self.buffers.values() if isinstance(buf, cuda.DeviceArray)]
102
+ del self.engine
103
+ del self.context
104
+ del self.buffers
105
+ del self.tensors
106
+
107
+ def build(
108
+ self,
109
+ onnx_path,
110
+ fp16,
111
+ input_profile=None,
112
+ enable_preview=False,
113
+ enable_all_tactics=False,
114
+ timing_cache=None,
115
+ workspace_size=0,
116
+ ):
117
+ logger.warning(f"Building TensorRT engine for {onnx_path}: {self.engine_path}")
118
+ p = Profile()
119
+ if input_profile:
120
+ for name, dims in input_profile.items():
121
+ assert len(dims) == 3
122
+ p.add(name, min=dims[0], opt=dims[1], max=dims[2])
123
+
124
+ config_kwargs = {}
125
+
126
+ config_kwargs["preview_features"] = [trt.PreviewFeature.DISABLE_EXTERNAL_TACTIC_SOURCES_FOR_CORE_0805]
127
+ if enable_preview:
128
+ # Faster dynamic shapes made optional since it increases engine build time.
129
+ config_kwargs["preview_features"].append(trt.PreviewFeature.FASTER_DYNAMIC_SHAPES_0805)
130
+ if workspace_size > 0:
131
+ config_kwargs["memory_pool_limits"] = {trt.MemoryPoolType.WORKSPACE: workspace_size}
132
+ if not enable_all_tactics:
133
+ config_kwargs["tactic_sources"] = []
134
+
135
+ engine = engine_from_network(
136
+ network_from_onnx_path(onnx_path, flags=[trt.OnnxParserFlag.NATIVE_INSTANCENORM]),
137
+ config=CreateConfig(fp16=fp16, profiles=[p], load_timing_cache=timing_cache, **config_kwargs),
138
+ save_timing_cache=timing_cache,
139
+ )
140
+ save_engine(engine, path=self.engine_path)
141
+
142
+ def load(self):
143
+ logger.warning(f"Loading TensorRT engine: {self.engine_path}")
144
+ self.engine = engine_from_bytes(bytes_from_path(self.engine_path))
145
+
146
+ def activate(self):
147
+ self.context = self.engine.create_execution_context()
148
+
149
+ def allocate_buffers(self, shape_dict=None, device="cuda"):
150
+ for idx in range(trt_util.get_bindings_per_profile(self.engine)):
151
+ binding = self.engine[idx]
152
+ if shape_dict and binding in shape_dict:
153
+ shape = shape_dict[binding]
154
+ else:
155
+ shape = self.engine.get_binding_shape(binding)
156
+ dtype = trt.nptype(self.engine.get_binding_dtype(binding))
157
+ if self.engine.binding_is_input(binding):
158
+ self.context.set_binding_shape(idx, shape)
159
+ tensor = torch.empty(tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]).to(device=device)
160
+ self.tensors[binding] = tensor
161
+ self.buffers[binding] = cuda.DeviceView(ptr=tensor.data_ptr(), shape=shape, dtype=dtype)
162
+
163
+ def infer(self, feed_dict, stream):
164
+ start_binding, end_binding = trt_util.get_active_profile_bindings(self.context)
165
+ # shallow copy of ordered dict
166
+ device_buffers = copy(self.buffers)
167
+ for name, buf in feed_dict.items():
168
+ assert isinstance(buf, cuda.DeviceView)
169
+ device_buffers[name] = buf
170
+ bindings = [0] * start_binding + [buf.ptr for buf in device_buffers.values()]
171
+ noerror = self.context.execute_async_v2(bindings=bindings, stream_handle=stream.ptr)
172
+ if not noerror:
173
+ raise ValueError("ERROR: inference failed.")
174
+
175
+ return self.tensors
176
+
177
+
178
+ class Optimizer:
179
+ def __init__(self, onnx_graph):
180
+ self.graph = gs.import_onnx(onnx_graph)
181
+
182
+ def cleanup(self, return_onnx=False):
183
+ self.graph.cleanup().toposort()
184
+ if return_onnx:
185
+ return gs.export_onnx(self.graph)
186
+
187
+ def select_outputs(self, keep, names=None):
188
+ self.graph.outputs = [self.graph.outputs[o] for o in keep]
189
+ if names:
190
+ for i, name in enumerate(names):
191
+ self.graph.outputs[i].name = name
192
+
193
+ def fold_constants(self, return_onnx=False):
194
+ onnx_graph = fold_constants(gs.export_onnx(self.graph), allow_onnxruntime_shape_inference=True)
195
+ self.graph = gs.import_onnx(onnx_graph)
196
+ if return_onnx:
197
+ return onnx_graph
198
+
199
+ def infer_shapes(self, return_onnx=False):
200
+ onnx_graph = gs.export_onnx(self.graph)
201
+ if onnx_graph.ByteSize() > 2147483648:
202
+ raise TypeError("ERROR: model size exceeds supported 2GB limit")
203
+ else:
204
+ onnx_graph = shape_inference.infer_shapes(onnx_graph)
205
+
206
+ self.graph = gs.import_onnx(onnx_graph)
207
+ if return_onnx:
208
+ return onnx_graph
209
+
210
+
211
+ class BaseModel:
212
+ def __init__(self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77):
213
+ self.model = model
214
+ self.name = "SD Model"
215
+ self.fp16 = fp16
216
+ self.device = device
217
+
218
+ self.min_batch = 1
219
+ self.max_batch = max_batch_size
220
+ self.min_image_shape = 256 # min image resolution: 256x256
221
+ self.max_image_shape = 1024 # max image resolution: 1024x1024
222
+ self.min_latent_shape = self.min_image_shape // 8
223
+ self.max_latent_shape = self.max_image_shape // 8
224
+
225
+ self.embedding_dim = embedding_dim
226
+ self.text_maxlen = text_maxlen
227
+
228
+ def get_model(self):
229
+ return self.model
230
+
231
+ def get_input_names(self):
232
+ pass
233
+
234
+ def get_output_names(self):
235
+ pass
236
+
237
+ def get_dynamic_axes(self):
238
+ return None
239
+
240
+ def get_sample_input(self, batch_size, image_height, image_width):
241
+ pass
242
+
243
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
244
+ return None
245
+
246
+ def get_shape_dict(self, batch_size, image_height, image_width):
247
+ return None
248
+
249
+ def optimize(self, onnx_graph):
250
+ opt = Optimizer(onnx_graph)
251
+ opt.cleanup()
252
+ opt.fold_constants()
253
+ opt.infer_shapes()
254
+ onnx_opt_graph = opt.cleanup(return_onnx=True)
255
+ return onnx_opt_graph
256
+
257
+ def check_dims(self, batch_size, image_height, image_width):
258
+ assert batch_size >= self.min_batch and batch_size <= self.max_batch
259
+ assert image_height % 8 == 0 and image_width % 8 == 0
260
+ latent_height = image_height // 8
261
+ latent_width = image_width // 8
262
+ assert latent_height >= self.min_latent_shape and latent_height <= self.max_latent_shape
263
+ assert latent_width >= self.min_latent_shape and latent_width <= self.max_latent_shape
264
+ return (latent_height, latent_width)
265
+
266
+ def get_minmax_dims(self, batch_size, image_height, image_width, static_batch, static_shape):
267
+ min_batch = batch_size if static_batch else self.min_batch
268
+ max_batch = batch_size if static_batch else self.max_batch
269
+ latent_height = image_height // 8
270
+ latent_width = image_width // 8
271
+ min_image_height = image_height if static_shape else self.min_image_shape
272
+ max_image_height = image_height if static_shape else self.max_image_shape
273
+ min_image_width = image_width if static_shape else self.min_image_shape
274
+ max_image_width = image_width if static_shape else self.max_image_shape
275
+ min_latent_height = latent_height if static_shape else self.min_latent_shape
276
+ max_latent_height = latent_height if static_shape else self.max_latent_shape
277
+ min_latent_width = latent_width if static_shape else self.min_latent_shape
278
+ max_latent_width = latent_width if static_shape else self.max_latent_shape
279
+ return (
280
+ min_batch,
281
+ max_batch,
282
+ min_image_height,
283
+ max_image_height,
284
+ min_image_width,
285
+ max_image_width,
286
+ min_latent_height,
287
+ max_latent_height,
288
+ min_latent_width,
289
+ max_latent_width,
290
+ )
291
+
292
+
293
+ def getOnnxPath(model_name, onnx_dir, opt=True):
294
+ return os.path.join(onnx_dir, model_name + (".opt" if opt else "") + ".onnx")
295
+
296
+
297
+ def getEnginePath(model_name, engine_dir):
298
+ return os.path.join(engine_dir, model_name + ".plan")
299
+
300
+
301
+ def build_engines(
302
+ models: dict,
303
+ engine_dir,
304
+ onnx_dir,
305
+ onnx_opset,
306
+ opt_image_height,
307
+ opt_image_width,
308
+ opt_batch_size=1,
309
+ force_engine_rebuild=False,
310
+ static_batch=False,
311
+ static_shape=True,
312
+ enable_preview=False,
313
+ enable_all_tactics=False,
314
+ timing_cache=None,
315
+ max_workspace_size=0,
316
+ ):
317
+ built_engines = {}
318
+ if not os.path.isdir(onnx_dir):
319
+ os.makedirs(onnx_dir)
320
+ if not os.path.isdir(engine_dir):
321
+ os.makedirs(engine_dir)
322
+
323
+ # Export models to ONNX
324
+ for model_name, model_obj in models.items():
325
+ engine_path = getEnginePath(model_name, engine_dir)
326
+ if force_engine_rebuild or not os.path.exists(engine_path):
327
+ logger.warning("Building Engines...")
328
+ logger.warning("Engine build can take a while to complete")
329
+ onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
330
+ onnx_opt_path = getOnnxPath(model_name, onnx_dir)
331
+ if force_engine_rebuild or not os.path.exists(onnx_opt_path):
332
+ if force_engine_rebuild or not os.path.exists(onnx_path):
333
+ logger.warning(f"Exporting model: {onnx_path}")
334
+ model = model_obj.get_model()
335
+ with torch.inference_mode(), torch.autocast("cuda"):
336
+ inputs = model_obj.get_sample_input(opt_batch_size, opt_image_height, opt_image_width)
337
+ torch.onnx.export(
338
+ model,
339
+ inputs,
340
+ onnx_path,
341
+ export_params=True,
342
+ opset_version=onnx_opset,
343
+ do_constant_folding=True,
344
+ input_names=model_obj.get_input_names(),
345
+ output_names=model_obj.get_output_names(),
346
+ dynamic_axes=model_obj.get_dynamic_axes(),
347
+ )
348
+ del model
349
+ torch.cuda.empty_cache()
350
+ gc.collect()
351
+ else:
352
+ logger.warning(f"Found cached model: {onnx_path}")
353
+
354
+ # Optimize onnx
355
+ if force_engine_rebuild or not os.path.exists(onnx_opt_path):
356
+ logger.warning(f"Generating optimized model: {onnx_opt_path}")
357
+ onnx_opt_graph = model_obj.optimize(onnx.load(onnx_path))
358
+ onnx.save(onnx_opt_graph, onnx_opt_path)
359
+ else:
360
+ logger.warning(f"Found cached optimized model: {onnx_opt_path}")
361
+
362
+ # Build TensorRT engines
363
+ for model_name, model_obj in models.items():
364
+ engine_path = getEnginePath(model_name, engine_dir)
365
+ engine = Engine(engine_path)
366
+ onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
367
+ onnx_opt_path = getOnnxPath(model_name, onnx_dir)
368
+
369
+ if force_engine_rebuild or not os.path.exists(engine.engine_path):
370
+ engine.build(
371
+ onnx_opt_path,
372
+ fp16=True,
373
+ input_profile=model_obj.get_input_profile(
374
+ opt_batch_size,
375
+ opt_image_height,
376
+ opt_image_width,
377
+ static_batch=static_batch,
378
+ static_shape=static_shape,
379
+ ),
380
+ enable_preview=enable_preview,
381
+ timing_cache=timing_cache,
382
+ workspace_size=max_workspace_size,
383
+ )
384
+ built_engines[model_name] = engine
385
+
386
+ # Load and activate TensorRT engines
387
+ for model_name, model_obj in models.items():
388
+ engine = built_engines[model_name]
389
+ engine.load()
390
+ engine.activate()
391
+
392
+ return built_engines
393
+
394
+
395
+ def runEngine(engine, feed_dict, stream):
396
+ return engine.infer(feed_dict, stream)
397
+
398
+
399
+ class CLIP(BaseModel):
400
+ def __init__(self, model, device, max_batch_size, embedding_dim):
401
+ super(CLIP, self).__init__(
402
+ model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
403
+ )
404
+ self.name = "CLIP"
405
+
406
+ def get_input_names(self):
407
+ return ["input_ids"]
408
+
409
+ def get_output_names(self):
410
+ return ["text_embeddings", "pooler_output"]
411
+
412
+ def get_dynamic_axes(self):
413
+ return {"input_ids": {0: "B"}, "text_embeddings": {0: "B"}}
414
+
415
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
416
+ self.check_dims(batch_size, image_height, image_width)
417
+ min_batch, max_batch, _, _, _, _, _, _, _, _ = self.get_minmax_dims(
418
+ batch_size, image_height, image_width, static_batch, static_shape
419
+ )
420
+ return {
421
+ "input_ids": [(min_batch, self.text_maxlen), (batch_size, self.text_maxlen), (max_batch, self.text_maxlen)]
422
+ }
423
+
424
+ def get_shape_dict(self, batch_size, image_height, image_width):
425
+ self.check_dims(batch_size, image_height, image_width)
426
+ return {
427
+ "input_ids": (batch_size, self.text_maxlen),
428
+ "text_embeddings": (batch_size, self.text_maxlen, self.embedding_dim),
429
+ }
430
+
431
+ def get_sample_input(self, batch_size, image_height, image_width):
432
+ self.check_dims(batch_size, image_height, image_width)
433
+ return torch.zeros(batch_size, self.text_maxlen, dtype=torch.int32, device=self.device)
434
+
435
+ def optimize(self, onnx_graph):
436
+ opt = Optimizer(onnx_graph)
437
+ opt.select_outputs([0]) # delete graph output#1
438
+ opt.cleanup()
439
+ opt.fold_constants()
440
+ opt.infer_shapes()
441
+ opt.select_outputs([0], names=["text_embeddings"]) # rename network output
442
+ opt_onnx_graph = opt.cleanup(return_onnx=True)
443
+ return opt_onnx_graph
444
+
445
+
446
+ def make_CLIP(model, device, max_batch_size, embedding_dim, inpaint=False):
447
+ return CLIP(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
448
+
449
+
450
+ class UNet(BaseModel):
451
+ def __init__(
452
+ self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77, unet_dim=4
453
+ ):
454
+ super(UNet, self).__init__(
455
+ model=model,
456
+ fp16=fp16,
457
+ device=device,
458
+ max_batch_size=max_batch_size,
459
+ embedding_dim=embedding_dim,
460
+ text_maxlen=text_maxlen,
461
+ )
462
+ self.unet_dim = unet_dim
463
+ self.name = "UNet"
464
+
465
+ def get_input_names(self):
466
+ return ["sample", "timestep", "encoder_hidden_states"]
467
+
468
+ def get_output_names(self):
469
+ return ["latent"]
470
+
471
+ def get_dynamic_axes(self):
472
+ return {
473
+ "sample": {0: "2B", 2: "H", 3: "W"},
474
+ "encoder_hidden_states": {0: "2B"},
475
+ "latent": {0: "2B", 2: "H", 3: "W"},
476
+ }
477
+
478
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
479
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
480
+ (
481
+ min_batch,
482
+ max_batch,
483
+ _,
484
+ _,
485
+ _,
486
+ _,
487
+ min_latent_height,
488
+ max_latent_height,
489
+ min_latent_width,
490
+ max_latent_width,
491
+ ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
492
+ return {
493
+ "sample": [
494
+ (2 * min_batch, self.unet_dim, min_latent_height, min_latent_width),
495
+ (2 * batch_size, self.unet_dim, latent_height, latent_width),
496
+ (2 * max_batch, self.unet_dim, max_latent_height, max_latent_width),
497
+ ],
498
+ "encoder_hidden_states": [
499
+ (2 * min_batch, self.text_maxlen, self.embedding_dim),
500
+ (2 * batch_size, self.text_maxlen, self.embedding_dim),
501
+ (2 * max_batch, self.text_maxlen, self.embedding_dim),
502
+ ],
503
+ }
504
+
505
+ def get_shape_dict(self, batch_size, image_height, image_width):
506
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
507
+ return {
508
+ "sample": (2 * batch_size, self.unet_dim, latent_height, latent_width),
509
+ "encoder_hidden_states": (2 * batch_size, self.text_maxlen, self.embedding_dim),
510
+ "latent": (2 * batch_size, 4, latent_height, latent_width),
511
+ }
512
+
513
+ def get_sample_input(self, batch_size, image_height, image_width):
514
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
515
+ dtype = torch.float16 if self.fp16 else torch.float32
516
+ return (
517
+ torch.randn(
518
+ 2 * batch_size, self.unet_dim, latent_height, latent_width, dtype=torch.float32, device=self.device
519
+ ),
520
+ torch.tensor([1.0], dtype=torch.float32, device=self.device),
521
+ torch.randn(2 * batch_size, self.text_maxlen, self.embedding_dim, dtype=dtype, device=self.device),
522
+ )
523
+
524
+
525
+ def make_UNet(model, device, max_batch_size, embedding_dim, inpaint=False):
526
+ return UNet(
527
+ model,
528
+ fp16=True,
529
+ device=device,
530
+ max_batch_size=max_batch_size,
531
+ embedding_dim=embedding_dim,
532
+ unet_dim=(9 if inpaint else 4),
533
+ )
534
+
535
+
536
+ class VAE(BaseModel):
537
+ def __init__(self, model, device, max_batch_size, embedding_dim):
538
+ super(VAE, self).__init__(
539
+ model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
540
+ )
541
+ self.name = "VAE decoder"
542
+
543
+ def get_input_names(self):
544
+ return ["latent"]
545
+
546
+ def get_output_names(self):
547
+ return ["images"]
548
+
549
+ def get_dynamic_axes(self):
550
+ return {"latent": {0: "B", 2: "H", 3: "W"}, "images": {0: "B", 2: "8H", 3: "8W"}}
551
+
552
+ def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
553
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
554
+ (
555
+ min_batch,
556
+ max_batch,
557
+ _,
558
+ _,
559
+ _,
560
+ _,
561
+ min_latent_height,
562
+ max_latent_height,
563
+ min_latent_width,
564
+ max_latent_width,
565
+ ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
566
+ return {
567
+ "latent": [
568
+ (min_batch, 4, min_latent_height, min_latent_width),
569
+ (batch_size, 4, latent_height, latent_width),
570
+ (max_batch, 4, max_latent_height, max_latent_width),
571
+ ]
572
+ }
573
+
574
+ def get_shape_dict(self, batch_size, image_height, image_width):
575
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
576
+ return {
577
+ "latent": (batch_size, 4, latent_height, latent_width),
578
+ "images": (batch_size, 3, image_height, image_width),
579
+ }
580
+
581
+ def get_sample_input(self, batch_size, image_height, image_width):
582
+ latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
583
+ return torch.randn(batch_size, 4, latent_height, latent_width, dtype=torch.float32, device=self.device)
584
+
585
+
586
+ def make_VAE(model, device, max_batch_size, embedding_dim, inpaint=False):
587
+ return VAE(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
588
+
589
+
590
+ class TensorRTStableDiffusionPipeline(StableDiffusionPipeline):
591
+ r"""
592
+ Pipeline for text-to-image generation using TensorRT accelerated Stable Diffusion.
593
+
594
+ This model inherits from [`StableDiffusionPipeline`]. Check the superclass documentation for the generic methods the
595
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
596
+
597
+ Args:
598
+ vae ([`AutoencoderKL`]):
599
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
600
+ text_encoder ([`CLIPTextModel`]):
601
+ Frozen text-encoder. Stable Diffusion uses the text portion of
602
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
603
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
604
+ tokenizer (`CLIPTokenizer`):
605
+ Tokenizer of class
606
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
607
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
608
+ scheduler ([`SchedulerMixin`]):
609
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
610
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
611
+ safety_checker ([`StableDiffusionSafetyChecker`]):
612
+ Classification module that estimates whether generated images could be considered offensive or harmful.
613
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
614
+ feature_extractor ([`CLIPFeatureExtractor`]):
615
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
616
+ """
617
+
618
+ def __init__(
619
+ self,
620
+ vae: AutoencoderKL,
621
+ text_encoder: CLIPTextModel,
622
+ tokenizer: CLIPTokenizer,
623
+ unet: UNet2DConditionModel,
624
+ scheduler: DDIMScheduler,
625
+ safety_checker: StableDiffusionSafetyChecker,
626
+ feature_extractor: CLIPFeatureExtractor,
627
+ requires_safety_checker: bool = True,
628
+ stages=["clip", "unet", "vae"],
629
+ image_height: int = 768,
630
+ image_width: int = 768,
631
+ max_batch_size: int = 16,
632
+ # ONNX export parameters
633
+ onnx_opset: int = 17,
634
+ onnx_dir: str = "onnx",
635
+ # TensorRT engine build parameters
636
+ engine_dir: str = "engine",
637
+ build_preview_features: bool = True,
638
+ force_engine_rebuild: bool = False,
639
+ timing_cache: str = "timing_cache",
640
+ ):
641
+ super().__init__(
642
+ vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker
643
+ )
644
+
645
+ self.vae.forward = self.vae.decode
646
+
647
+ self.stages = stages
648
+ self.image_height, self.image_width = image_height, image_width
649
+ self.inpaint = False
650
+ self.onnx_opset = onnx_opset
651
+ self.onnx_dir = onnx_dir
652
+ self.engine_dir = engine_dir
653
+ self.force_engine_rebuild = force_engine_rebuild
654
+ self.timing_cache = timing_cache
655
+ self.build_static_batch = False
656
+ self.build_dynamic_shape = False
657
+ self.build_preview_features = build_preview_features
658
+
659
+ self.max_batch_size = max_batch_size
660
+ # TODO: Restrict batch size to 4 for larger image dimensions as a WAR for TensorRT limitation.
661
+ if self.build_dynamic_shape or self.image_height > 512 or self.image_width > 512:
662
+ self.max_batch_size = 4
663
+
664
+ self.stream = None # loaded in loadResources()
665
+ self.models = {} # loaded in __loadModels()
666
+ self.engine = {} # loaded in build_engines()
667
+
668
+ def __loadModels(self):
669
+ # Load pipeline models
670
+ self.embedding_dim = self.text_encoder.config.hidden_size
671
+ models_args = {
672
+ "device": self.torch_device,
673
+ "max_batch_size": self.max_batch_size,
674
+ "embedding_dim": self.embedding_dim,
675
+ "inpaint": self.inpaint,
676
+ }
677
+ if "clip" in self.stages:
678
+ self.models["clip"] = make_CLIP(self.text_encoder, **models_args)
679
+ if "unet" in self.stages:
680
+ self.models["unet"] = make_UNet(self.unet, **models_args)
681
+ if "vae" in self.stages:
682
+ self.models["vae"] = make_VAE(self.vae, **models_args)
683
+
684
+ @classmethod
685
+ def set_cached_folder(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
686
+ cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
687
+ resume_download = kwargs.pop("resume_download", False)
688
+ proxies = kwargs.pop("proxies", None)
689
+ local_files_only = kwargs.pop("local_files_only", False)
690
+ use_auth_token = kwargs.pop("use_auth_token", None)
691
+ revision = kwargs.pop("revision", None)
692
+
693
+ cls.cached_folder = (
694
+ pretrained_model_name_or_path
695
+ if os.path.isdir(pretrained_model_name_or_path)
696
+ else snapshot_download(
697
+ pretrained_model_name_or_path,
698
+ cache_dir=cache_dir,
699
+ resume_download=resume_download,
700
+ proxies=proxies,
701
+ local_files_only=local_files_only,
702
+ use_auth_token=use_auth_token,
703
+ revision=revision,
704
+ )
705
+ )
706
+
707
+ def to(self, torch_device: Optional[Union[str, torch.device]] = None, silence_dtype_warnings: bool = False):
708
+ super().to(torch_device, silence_dtype_warnings=silence_dtype_warnings)
709
+
710
+ self.onnx_dir = os.path.join(self.cached_folder, self.onnx_dir)
711
+ self.engine_dir = os.path.join(self.cached_folder, self.engine_dir)
712
+ self.timing_cache = os.path.join(self.cached_folder, self.timing_cache)
713
+
714
+ # set device
715
+ self.torch_device = self._execution_device
716
+ logger.warning(f"Running inference on device: {self.torch_device}")
717
+
718
+ # load models
719
+ self.__loadModels()
720
+
721
+ # build engines
722
+ self.engine = build_engines(
723
+ self.models,
724
+ self.engine_dir,
725
+ self.onnx_dir,
726
+ self.onnx_opset,
727
+ opt_image_height=self.image_height,
728
+ opt_image_width=self.image_width,
729
+ force_engine_rebuild=self.force_engine_rebuild,
730
+ static_batch=self.build_static_batch,
731
+ static_shape=not self.build_dynamic_shape,
732
+ enable_preview=self.build_preview_features,
733
+ timing_cache=self.timing_cache,
734
+ )
735
+
736
+ return self
737
+
738
+ def __encode_prompt(self, prompt, negative_prompt):
739
+ r"""
740
+ Encodes the prompt into text encoder hidden states.
741
+
742
+ Args:
743
+ prompt (`str` or `List[str]`, *optional*):
744
+ prompt to be encoded
745
+ negative_prompt (`str` or `List[str]`, *optional*):
746
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
747
+ `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead.
748
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
749
+ """
750
+ # Tokenize prompt
751
+ text_input_ids = (
752
+ self.tokenizer(
753
+ prompt,
754
+ padding="max_length",
755
+ max_length=self.tokenizer.model_max_length,
756
+ truncation=True,
757
+ return_tensors="pt",
758
+ )
759
+ .input_ids.type(torch.int32)
760
+ .to(self.torch_device)
761
+ )
762
+
763
+ text_input_ids_inp = device_view(text_input_ids)
764
+ # NOTE: output tensor for CLIP must be cloned because it will be overwritten when called again for negative prompt
765
+ text_embeddings = runEngine(self.engine["clip"], {"input_ids": text_input_ids_inp}, self.stream)[
766
+ "text_embeddings"
767
+ ].clone()
768
+
769
+ # Tokenize negative prompt
770
+ uncond_input_ids = (
771
+ self.tokenizer(
772
+ negative_prompt,
773
+ padding="max_length",
774
+ max_length=self.tokenizer.model_max_length,
775
+ truncation=True,
776
+ return_tensors="pt",
777
+ )
778
+ .input_ids.type(torch.int32)
779
+ .to(self.torch_device)
780
+ )
781
+ uncond_input_ids_inp = device_view(uncond_input_ids)
782
+ uncond_embeddings = runEngine(self.engine["clip"], {"input_ids": uncond_input_ids_inp}, self.stream)[
783
+ "text_embeddings"
784
+ ]
785
+
786
+ # Concatenate the unconditional and text embeddings into a single batch to avoid doing two forward passes for classifier free guidance
787
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings]).to(dtype=torch.float16)
788
+
789
+ return text_embeddings
790
+
791
+ def __denoise_latent(
792
+ self, latents, text_embeddings, timesteps=None, step_offset=0, mask=None, masked_image_latents=None
793
+ ):
794
+ if not isinstance(timesteps, torch.Tensor):
795
+ timesteps = self.scheduler.timesteps
796
+ for step_index, timestep in enumerate(timesteps):
797
+ # Expand the latents if we are doing classifier free guidance
798
+ latent_model_input = torch.cat([latents] * 2)
799
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, timestep)
800
+ if isinstance(mask, torch.Tensor):
801
+ latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
802
+
803
+ # Predict the noise residual
804
+ timestep_float = timestep.float() if timestep.dtype != torch.float32 else timestep
805
+
806
+ sample_inp = device_view(latent_model_input)
807
+ timestep_inp = device_view(timestep_float)
808
+ embeddings_inp = device_view(text_embeddings)
809
+ noise_pred = runEngine(
810
+ self.engine["unet"],
811
+ {"sample": sample_inp, "timestep": timestep_inp, "encoder_hidden_states": embeddings_inp},
812
+ self.stream,
813
+ )["latent"]
814
+
815
+ # Perform guidance
816
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
817
+ noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond)
818
+
819
+ latents = self.scheduler.step(noise_pred, timestep, latents).prev_sample
820
+
821
+ latents = 1.0 / 0.18215 * latents
822
+ return latents
823
+
824
+ def __decode_latent(self, latents):
825
+ images = runEngine(self.engine["vae"], {"latent": device_view(latents)}, self.stream)["images"]
826
+ images = (images / 2 + 0.5).clamp(0, 1)
827
+ return images.cpu().permute(0, 2, 3, 1).float().numpy()
828
+
829
+ def __loadResources(self, image_height, image_width, batch_size):
830
+ self.stream = cuda.Stream()
831
+
832
+ # Allocate buffers for TensorRT engine bindings
833
+ for model_name, obj in self.models.items():
834
+ self.engine[model_name].allocate_buffers(
835
+ shape_dict=obj.get_shape_dict(batch_size, image_height, image_width), device=self.torch_device
836
+ )
837
+
838
+ @torch.no_grad()
839
+ def __call__(
840
+ self,
841
+ prompt: Union[str, List[str]] = None,
842
+ num_inference_steps: int = 50,
843
+ guidance_scale: float = 7.5,
844
+ negative_prompt: Optional[Union[str, List[str]]] = None,
845
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
846
+ ):
847
+ r"""
848
+ Function invoked when calling the pipeline for generation.
849
+
850
+ Args:
851
+ prompt (`str` or `List[str]`, *optional*):
852
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
853
+ instead.
854
+ num_inference_steps (`int`, *optional*, defaults to 50):
855
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
856
+ expense of slower inference.
857
+ guidance_scale (`float`, *optional*, defaults to 7.5):
858
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
859
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
860
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
861
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
862
+ usually at the expense of lower image quality.
863
+ negative_prompt (`str` or `List[str]`, *optional*):
864
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
865
+ `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead.
866
+ Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
867
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
868
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
869
+ to make generation deterministic.
870
+
871
+ """
872
+ self.generator = generator
873
+ self.denoising_steps = num_inference_steps
874
+ self.guidance_scale = guidance_scale
875
+
876
+ # Pre-compute latent input scales and linear multistep coefficients
877
+ self.scheduler.set_timesteps(self.denoising_steps, device=self.torch_device)
878
+
879
+ # Define call parameters
880
+ if prompt is not None and isinstance(prompt, str):
881
+ batch_size = 1
882
+ prompt = [prompt]
883
+ elif prompt is not None and isinstance(prompt, list):
884
+ batch_size = len(prompt)
885
+ else:
886
+ raise ValueError(f"Expected prompt to be of type list or str but got {type(prompt)}")
887
+
888
+ if negative_prompt is None:
889
+ negative_prompt = [""] * batch_size
890
+
891
+ if negative_prompt is not None and isinstance(negative_prompt, str):
892
+ negative_prompt = [negative_prompt]
893
+
894
+ assert len(prompt) == len(negative_prompt)
895
+
896
+ if batch_size > self.max_batch_size:
897
+ raise ValueError(
898
+ f"Batch size {len(prompt)} is larger than allowed {self.max_batch_size}. If dynamic shape is used, then maximum batch size is 4"
899
+ )
900
+
901
+ # load resources
902
+ self.__loadResources(self.image_height, self.image_width, batch_size)
903
+
904
+ with torch.inference_mode(), torch.autocast("cuda"), trt.Runtime(TRT_LOGGER):
905
+ # CLIP text encoder
906
+ text_embeddings = self.__encode_prompt(prompt, negative_prompt)
907
+
908
+ # Pre-initialize latents
909
+ num_channels_latents = self.unet.in_channels
910
+ latents = self.prepare_latents(
911
+ batch_size,
912
+ num_channels_latents,
913
+ self.image_height,
914
+ self.image_width,
915
+ torch.float32,
916
+ self.torch_device,
917
+ generator,
918
+ )
919
+
920
+ # UNet denoiser
921
+ latents = self.__denoise_latent(latents, text_embeddings)
922
+
923
+ # VAE decode latent
924
+ images = self.__decode_latent(latents)
925
+
926
+ images, has_nsfw_concept = self.run_safety_checker(images, self.torch_device, text_embeddings.dtype)
927
+ images = self.numpy_to_pil(images)
928
+ return StableDiffusionPipelineOutput(images=images, nsfw_content_detected=has_nsfw_concept)
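Example usage for the TensorRT text-to-image pipeline above (a minimal sketch: the checkpoint id and prompt are assumptions; the scheduler swap mirrors the `DDIMScheduler` type expected by the constructor):

```python
import torch
from diffusers import DDIMScheduler, DiffusionPipeline

# Assumed 768x768 checkpoint, matching the pipeline's default image_height/image_width.
model_id = "stabilityai/stable-diffusion-2-1"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline="stable_diffusion_tensorrt_txt2img",
    scheduler=scheduler,
    torch_dtype=torch.float16,
)

# Cache ONNX models and TensorRT engines next to the downloaded weights.
pipe.set_cached_folder(model_id)

pipe = pipe.to("cuda")  # builds (or loads) the TensorRT engines

image = pipe("a beautiful photograph of Mt. Fuji during cherry blossom").images[0]
image.save("tensorrt_mt_fuji.png")
```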
v0.19.2/stable_unclip.py ADDED
@@ -0,0 +1,287 @@
1
+ import types
2
+ from typing import List, Optional, Tuple, Union
3
+
4
+ import torch
5
+ from transformers import CLIPTextModelWithProjection, CLIPTokenizer
6
+ from transformers.models.clip.modeling_clip import CLIPTextModelOutput
7
+
8
+ from diffusers.models import PriorTransformer
9
+ from diffusers.pipelines import DiffusionPipeline, StableDiffusionImageVariationPipeline
10
+ from diffusers.schedulers import UnCLIPScheduler
11
+ from diffusers.utils import logging, randn_tensor
12
+
13
+
14
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
15
+
16
+
17
+ def _encode_image(self, image, device, num_images_per_prompt, do_classifier_free_guidance):
18
+ image = image.to(device=device)
19
+ image_embeddings = image # take image as image_embeddings
20
+ image_embeddings = image_embeddings.unsqueeze(1)
21
+
22
+ # duplicate image embeddings for each generation per prompt, using mps friendly method
23
+ bs_embed, seq_len, _ = image_embeddings.shape
24
+ image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1)
25
+ image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
26
+
27
+ if do_classifier_free_guidance:
28
+ uncond_embeddings = torch.zeros_like(image_embeddings)
29
+
30
+ # For classifier free guidance, we need to do two forward passes.
31
+ # Here we concatenate the unconditional and text embeddings into a single batch
32
+ # to avoid doing two forward passes
33
+ image_embeddings = torch.cat([uncond_embeddings, image_embeddings])
34
+
35
+ return image_embeddings
36
+
37
+
38
+ class StableUnCLIPPipeline(DiffusionPipeline):
39
+ def __init__(
40
+ self,
41
+ prior: PriorTransformer,
42
+ tokenizer: CLIPTokenizer,
43
+ text_encoder: CLIPTextModelWithProjection,
44
+ prior_scheduler: UnCLIPScheduler,
45
+ decoder_pipe_kwargs: Optional[dict] = None,
46
+ ):
47
+ super().__init__()
48
+
49
+ decoder_pipe_kwargs = {"image_encoder": None} if decoder_pipe_kwargs is None else decoder_pipe_kwargs
50
+
51
+ decoder_pipe_kwargs["torch_dtype"] = decoder_pipe_kwargs.get("torch_dtype", None) or prior.dtype
52
+
53
+ self.decoder_pipe = StableDiffusionImageVariationPipeline.from_pretrained(
54
+ "lambdalabs/sd-image-variations-diffusers", **decoder_pipe_kwargs
55
+ )
56
+
57
+ # replace `_encode_image` method
58
+ self.decoder_pipe._encode_image = types.MethodType(_encode_image, self.decoder_pipe)
59
+
60
+ self.register_modules(
61
+ prior=prior,
62
+ tokenizer=tokenizer,
63
+ text_encoder=text_encoder,
64
+ prior_scheduler=prior_scheduler,
65
+ )
66
+
67
+ def _encode_prompt(
68
+ self,
69
+ prompt,
70
+ device,
71
+ num_images_per_prompt,
72
+ do_classifier_free_guidance,
73
+ text_model_output: Optional[Union[CLIPTextModelOutput, Tuple]] = None,
74
+ text_attention_mask: Optional[torch.Tensor] = None,
75
+ ):
76
+ if text_model_output is None:
77
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
78
+ # get prompt text embeddings
79
+ text_inputs = self.tokenizer(
80
+ prompt,
81
+ padding="max_length",
82
+ max_length=self.tokenizer.model_max_length,
83
+ return_tensors="pt",
84
+ )
85
+ text_input_ids = text_inputs.input_ids
86
+ text_mask = text_inputs.attention_mask.bool().to(device)
87
+
88
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
89
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
90
+ logger.warning(
91
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
92
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
93
+ )
94
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
95
+
96
+ text_encoder_output = self.text_encoder(text_input_ids.to(device))
97
+
98
+ text_embeddings = text_encoder_output.text_embeds
99
+ text_encoder_hidden_states = text_encoder_output.last_hidden_state
100
+
101
+ else:
102
+ batch_size = text_model_output[0].shape[0]
103
+ text_embeddings, text_encoder_hidden_states = text_model_output[0], text_model_output[1]
104
+ text_mask = text_attention_mask
105
+
106
+ text_embeddings = text_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
107
+ text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
108
+ text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
109
+
110
+ if do_classifier_free_guidance:
111
+ uncond_tokens = [""] * batch_size
112
+
113
+ uncond_input = self.tokenizer(
114
+ uncond_tokens,
115
+ padding="max_length",
116
+ max_length=self.tokenizer.model_max_length,
117
+ truncation=True,
118
+ return_tensors="pt",
119
+ )
120
+ uncond_text_mask = uncond_input.attention_mask.bool().to(device)
121
+ uncond_embeddings_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))
122
+
123
+ uncond_embeddings = uncond_embeddings_text_encoder_output.text_embeds
124
+ uncond_text_encoder_hidden_states = uncond_embeddings_text_encoder_output.last_hidden_state
125
+
126
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
127
+
128
+ seq_len = uncond_embeddings.shape[1]
129
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt)
130
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len)
131
+
132
+ seq_len = uncond_text_encoder_hidden_states.shape[1]
133
+ uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
134
+ uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
135
+ batch_size * num_images_per_prompt, seq_len, -1
136
+ )
137
+ uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
138
+
139
+ # done duplicates
140
+
141
+ # For classifier free guidance, we need to do two forward passes.
142
+ # Here we concatenate the unconditional and text embeddings into a single batch
143
+ # to avoid doing two forward passes
144
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
145
+ text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])
146
+
147
+ text_mask = torch.cat([uncond_text_mask, text_mask])
148
+
149
+ return text_embeddings, text_encoder_hidden_states, text_mask
150
+
151
+ @property
152
+ def _execution_device(self):
153
+ r"""
154
+ Returns the device on which the pipeline's models will be executed. After calling
155
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
156
+ hooks.
157
+ """
158
+ if self.device != torch.device("meta") or not hasattr(self.prior, "_hf_hook"):
159
+ return self.device
160
+ for module in self.prior.modules():
161
+ if (
162
+ hasattr(module, "_hf_hook")
163
+ and hasattr(module._hf_hook, "execution_device")
164
+ and module._hf_hook.execution_device is not None
165
+ ):
166
+ return torch.device(module._hf_hook.execution_device)
167
+ return self.device
168
+
169
+ def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
170
+ if latents is None:
171
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
172
+ else:
173
+ if latents.shape != shape:
174
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
175
+ latents = latents.to(device)
176
+
177
+ latents = latents * scheduler.init_noise_sigma
178
+ return latents
179
+
180
+ def to(self, torch_device: Optional[Union[str, torch.device]] = None):
181
+ self.decoder_pipe.to(torch_device)
182
+ super().to(torch_device)
183
+
184
+ @torch.no_grad()
185
+ def __call__(
186
+ self,
187
+ prompt: Optional[Union[str, List[str]]] = None,
188
+ height: Optional[int] = None,
189
+ width: Optional[int] = None,
190
+ num_images_per_prompt: int = 1,
191
+ prior_num_inference_steps: int = 25,
192
+ generator: Optional[torch.Generator] = None,
193
+ prior_latents: Optional[torch.FloatTensor] = None,
194
+ text_model_output: Optional[Union[CLIPTextModelOutput, Tuple]] = None,
195
+ text_attention_mask: Optional[torch.Tensor] = None,
196
+ prior_guidance_scale: float = 4.0,
197
+ decoder_guidance_scale: float = 8.0,
198
+ decoder_num_inference_steps: int = 50,
199
+ decoder_num_images_per_prompt: Optional[int] = 1,
200
+ decoder_eta: float = 0.0,
201
+ output_type: Optional[str] = "pil",
202
+ return_dict: bool = True,
203
+ ):
204
+ if prompt is not None:
205
+ if isinstance(prompt, str):
206
+ batch_size = 1
207
+ elif isinstance(prompt, list):
208
+ batch_size = len(prompt)
209
+ else:
210
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
211
+ else:
212
+ batch_size = text_model_output[0].shape[0]
213
+
214
+ device = self._execution_device
215
+
216
+ batch_size = batch_size * num_images_per_prompt
217
+
218
+ do_classifier_free_guidance = prior_guidance_scale > 1.0 or decoder_guidance_scale > 1.0
219
+
220
+ text_embeddings, text_encoder_hidden_states, text_mask = self._encode_prompt(
221
+ prompt, device, num_images_per_prompt, do_classifier_free_guidance, text_model_output, text_attention_mask
222
+ )
223
+
224
+ # prior
225
+
226
+ self.prior_scheduler.set_timesteps(prior_num_inference_steps, device=device)
227
+ prior_timesteps_tensor = self.prior_scheduler.timesteps
228
+
229
+ embedding_dim = self.prior.config.embedding_dim
230
+
231
+ prior_latents = self.prepare_latents(
232
+ (batch_size, embedding_dim),
233
+ text_embeddings.dtype,
234
+ device,
235
+ generator,
236
+ prior_latents,
237
+ self.prior_scheduler,
238
+ )
239
+
240
+ for i, t in enumerate(self.progress_bar(prior_timesteps_tensor)):
241
+ # expand the latents if we are doing classifier free guidance
242
+ latent_model_input = torch.cat([prior_latents] * 2) if do_classifier_free_guidance else prior_latents
243
+
244
+ predicted_image_embedding = self.prior(
245
+ latent_model_input,
246
+ timestep=t,
247
+ proj_embedding=text_embeddings,
248
+ encoder_hidden_states=text_encoder_hidden_states,
249
+ attention_mask=text_mask,
250
+ ).predicted_image_embedding
251
+
252
+ if do_classifier_free_guidance:
253
+ predicted_image_embedding_uncond, predicted_image_embedding_text = predicted_image_embedding.chunk(2)
254
+ predicted_image_embedding = predicted_image_embedding_uncond + prior_guidance_scale * (
255
+ predicted_image_embedding_text - predicted_image_embedding_uncond
256
+ )
257
+
258
+ if i + 1 == prior_timesteps_tensor.shape[0]:
259
+ prev_timestep = None
260
+ else:
261
+ prev_timestep = prior_timesteps_tensor[i + 1]
262
+
263
+ prior_latents = self.prior_scheduler.step(
264
+ predicted_image_embedding,
265
+ timestep=t,
266
+ sample=prior_latents,
267
+ generator=generator,
268
+ prev_timestep=prev_timestep,
269
+ ).prev_sample
270
+
271
+ prior_latents = self.prior.post_process_latents(prior_latents)
272
+
273
+ image_embeddings = prior_latents
274
+
275
+ output = self.decoder_pipe(
276
+ image=image_embeddings,
277
+ height=height,
278
+ width=width,
279
+ num_inference_steps=decoder_num_inference_steps,
280
+ guidance_scale=decoder_guidance_scale,
281
+ generator=generator,
282
+ output_type=output_type,
283
+ return_dict=return_dict,
284
+ num_images_per_prompt=decoder_num_images_per_prompt,
285
+ eta=decoder_eta,
286
+ )
287
+ return output
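
For reference, a minimal usage sketch of the prior + decoder composition above, assuming this file is consumed as the `stable_unclip` community pipeline and that a karlo-style prior checkpoint such as `kakaobrain/karlo-v1-alpha` is used (both names are assumptions, not something stated in this diff):

```python
import torch
from diffusers import DiffusionPipeline

# Checkpoint and custom_pipeline names are assumptions for illustration.
pipe = DiffusionPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha",
    custom_pipeline="stable_unclip",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# The prior maps the prompt to CLIP image embeddings; the wrapped decoder pipeline renders them.
output = pipe(
    prompt="a photo of an astronaut riding a horse on mars",
    prior_num_inference_steps=25,
    decoder_num_inference_steps=50,
)
output.images[0].save("stable_unclip_sample.png")
```
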
v0.19.2/text_inpainting.py ADDED
@@ -0,0 +1,302 @@
1
+ from typing import Callable, List, Optional, Union
2
+
3
+ import PIL
4
+ import torch
5
+ from transformers import (
6
+ CLIPImageProcessor,
7
+ CLIPSegForImageSegmentation,
8
+ CLIPSegProcessor,
9
+ CLIPTextModel,
10
+ CLIPTokenizer,
11
+ )
12
+
13
+ from diffusers import DiffusionPipeline
14
+ from diffusers.configuration_utils import FrozenDict
15
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
16
+ from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline
17
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
18
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
19
+ from diffusers.utils import deprecate, is_accelerate_available, logging
20
+
21
+
22
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
23
+
24
+
25
+ class TextInpainting(DiffusionPipeline):
26
+ r"""
27
+ Pipeline for text based inpainting using Stable Diffusion.
28
+ Uses CLIPSeg to get a mask from the given text, then calls the Inpainting pipeline with the generated mask
29
+
30
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
31
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
32
+
33
+ Args:
34
+ segmentation_model ([`CLIPSegForImageSegmentation`]):
35
+ CLIPSeg model to generate a mask from the given text. Please refer to the [model card](https://huggingface.co/docs/transformers/model_doc/clipseg) for details.
36
+ segmentation_processor ([`CLIPSegProcessor`]):
37
+ CLIPSeg processor to prepare the image and text inputs for the segmentation model. Please refer to the
38
+ [model card](https://huggingface.co/docs/transformers/model_doc/clipseg) for details.
39
+ vae ([`AutoencoderKL`]):
40
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
41
+ text_encoder ([`CLIPTextModel`]):
42
+ Frozen text-encoder. Stable Diffusion uses the text portion of
43
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
44
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
45
+ tokenizer (`CLIPTokenizer`):
46
+ Tokenizer of class
47
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
48
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
49
+ scheduler ([`SchedulerMixin`]):
50
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
51
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
52
+ safety_checker ([`StableDiffusionSafetyChecker`]):
53
+ Classification module that estimates whether generated images could be considered offensive or harmful.
54
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
55
+ feature_extractor ([`CLIPImageProcessor`]):
56
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
57
+ """
58
+
59
+ def __init__(
60
+ self,
61
+ segmentation_model: CLIPSegForImageSegmentation,
62
+ segmentation_processor: CLIPSegProcessor,
63
+ vae: AutoencoderKL,
64
+ text_encoder: CLIPTextModel,
65
+ tokenizer: CLIPTokenizer,
66
+ unet: UNet2DConditionModel,
67
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
68
+ safety_checker: StableDiffusionSafetyChecker,
69
+ feature_extractor: CLIPImageProcessor,
70
+ ):
71
+ super().__init__()
72
+
73
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
74
+ deprecation_message = (
75
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
76
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
77
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
78
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
79
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
80
+ " file"
81
+ )
82
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
83
+ new_config = dict(scheduler.config)
84
+ new_config["steps_offset"] = 1
85
+ scheduler._internal_dict = FrozenDict(new_config)
86
+
87
+ if hasattr(scheduler.config, "skip_prk_steps") and scheduler.config.skip_prk_steps is False:
88
+ deprecation_message = (
89
+ f"The configuration file of this scheduler: {scheduler} has not set the configuration"
90
+ " `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make"
91
+ " sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to"
92
+ " incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face"
93
+ " Hub, it would be very nice if you could open a Pull request for the"
94
+ " `scheduler/scheduler_config.json` file"
95
+ )
96
+ deprecate("skip_prk_steps not set", "1.0.0", deprecation_message, standard_warn=False)
97
+ new_config = dict(scheduler.config)
98
+ new_config["skip_prk_steps"] = True
99
+ scheduler._internal_dict = FrozenDict(new_config)
100
+
101
+ if safety_checker is None:
102
+ logger.warning(
103
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
104
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
105
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
106
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
107
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
108
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
109
+ )
110
+
111
+ self.register_modules(
112
+ segmentation_model=segmentation_model,
113
+ segmentation_processor=segmentation_processor,
114
+ vae=vae,
115
+ text_encoder=text_encoder,
116
+ tokenizer=tokenizer,
117
+ unet=unet,
118
+ scheduler=scheduler,
119
+ safety_checker=safety_checker,
120
+ feature_extractor=feature_extractor,
121
+ )
122
+
123
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
124
+ r"""
125
+ Enable sliced attention computation.
126
+
127
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
128
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
129
+
130
+ Args:
131
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
132
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
133
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
134
+ `attention_head_dim` must be a multiple of `slice_size`.
135
+ """
136
+ if slice_size == "auto":
137
+ # half the attention head size is usually a good trade-off between
138
+ # speed and memory
139
+ slice_size = self.unet.config.attention_head_dim // 2
140
+ self.unet.set_attention_slice(slice_size)
141
+
142
+ def disable_attention_slicing(self):
143
+ r"""
144
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
145
+ back to computing attention in one step.
146
+ """
147
+ # set slice_size = `None` to disable `attention slicing`
148
+ self.enable_attention_slicing(None)
149
+
150
+ def enable_sequential_cpu_offload(self):
151
+ r"""
152
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
153
+ text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
154
+ `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called.
155
+ """
156
+ if is_accelerate_available():
157
+ from accelerate import cpu_offload
158
+ else:
159
+ raise ImportError("Please install accelerate via `pip install accelerate`")
160
+
161
+ device = torch.device("cuda")
162
+
163
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:
164
+ if cpu_offloaded_model is not None:
165
+ cpu_offload(cpu_offloaded_model, device)
166
+
167
+ @property
168
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
169
+ def _execution_device(self):
170
+ r"""
171
+ Returns the device on which the pipeline's models will be executed. After calling
172
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
173
+ hooks.
174
+ """
175
+ if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
176
+ return self.device
177
+ for module in self.unet.modules():
178
+ if (
179
+ hasattr(module, "_hf_hook")
180
+ and hasattr(module._hf_hook, "execution_device")
181
+ and module._hf_hook.execution_device is not None
182
+ ):
183
+ return torch.device(module._hf_hook.execution_device)
184
+ return self.device
185
+
186
+ @torch.no_grad()
187
+ def __call__(
188
+ self,
189
+ prompt: Union[str, List[str]],
190
+ image: Union[torch.FloatTensor, PIL.Image.Image],
191
+ text: str,
192
+ height: int = 512,
193
+ width: int = 512,
194
+ num_inference_steps: int = 50,
195
+ guidance_scale: float = 7.5,
196
+ negative_prompt: Optional[Union[str, List[str]]] = None,
197
+ num_images_per_prompt: Optional[int] = 1,
198
+ eta: float = 0.0,
199
+ generator: Optional[torch.Generator] = None,
200
+ latents: Optional[torch.FloatTensor] = None,
201
+ output_type: Optional[str] = "pil",
202
+ return_dict: bool = True,
203
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
204
+ callback_steps: int = 1,
205
+ **kwargs,
206
+ ):
207
+ r"""
208
+ Function invoked when calling the pipeline for generation.
209
+
210
+ Args:
211
+ prompt (`str` or `List[str]`):
212
+ The prompt or prompts to guide the image generation.
213
+ image (`PIL.Image.Image`):
214
+ `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
215
+ be masked out with `mask_image` and repainted according to `prompt`.
216
+ text (`str`):
217
+ The text to use to generate the mask.
218
+ height (`int`, *optional*, defaults to 512):
219
+ The height in pixels of the generated image.
220
+ width (`int`, *optional*, defaults to 512):
221
+ The width in pixels of the generated image.
222
+ num_inference_steps (`int`, *optional*, defaults to 50):
223
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
224
+ expense of slower inference.
225
+ guidance_scale (`float`, *optional*, defaults to 7.5):
226
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
227
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
228
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
229
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
230
+ usually at the expense of lower image quality.
231
+ negative_prompt (`str` or `List[str]`, *optional*):
232
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
233
+ if `guidance_scale` is less than `1`).
234
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
235
+ The number of images to generate per prompt.
236
+ eta (`float`, *optional*, defaults to 0.0):
237
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
238
+ [`schedulers.DDIMScheduler`], will be ignored for others.
239
+ generator (`torch.Generator`, *optional*):
240
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
241
+ deterministic.
242
+ latents (`torch.FloatTensor`, *optional*):
243
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
244
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
245
+ tensor will be generated by sampling using the supplied random `generator`.
246
+ output_type (`str`, *optional*, defaults to `"pil"`):
247
+ The output format of the generate image. Choose between
248
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
249
+ return_dict (`bool`, *optional*, defaults to `True`):
250
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
251
+ plain tuple.
252
+ callback (`Callable`, *optional*):
253
+ A function that will be called every `callback_steps` steps during inference. The function will be
254
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
255
+ callback_steps (`int`, *optional*, defaults to 1):
256
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
257
+ called at every step.
258
+
259
+ Returns:
260
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
261
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
262
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
263
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
264
+ (nsfw) content, according to the `safety_checker`.
265
+ """
266
+
267
+ # We use the input text to generate the mask
268
+ inputs = self.segmentation_processor(
269
+ text=[text], images=[image], padding="max_length", return_tensors="pt"
270
+ ).to(self.device)
271
+ outputs = self.segmentation_model(**inputs)
272
+ mask = torch.sigmoid(outputs.logits).cpu().detach().unsqueeze(-1).numpy()
273
+ mask_pil = self.numpy_to_pil(mask)[0].resize(image.size)
274
+
275
+ # Run inpainting pipeline with the generated mask
276
+ inpainting_pipeline = StableDiffusionInpaintPipeline(
277
+ vae=self.vae,
278
+ text_encoder=self.text_encoder,
279
+ tokenizer=self.tokenizer,
280
+ unet=self.unet,
281
+ scheduler=self.scheduler,
282
+ safety_checker=self.safety_checker,
283
+ feature_extractor=self.feature_extractor,
284
+ )
285
+ return inpainting_pipeline(
286
+ prompt=prompt,
287
+ image=image,
288
+ mask_image=mask_pil,
289
+ height=height,
290
+ width=width,
291
+ num_inference_steps=num_inference_steps,
292
+ guidance_scale=guidance_scale,
293
+ negative_prompt=negative_prompt,
294
+ num_images_per_prompt=num_images_per_prompt,
295
+ eta=eta,
296
+ generator=generator,
297
+ latents=latents,
298
+ output_type=output_type,
299
+ return_dict=return_dict,
300
+ callback=callback,
301
+ callback_steps=callback_steps,
302
+ )
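
A minimal usage sketch for the pipeline above, loaded as the `text_inpainting` community pipeline. The CLIPSeg and inpainting checkpoint ids below are illustrative assumptions, not something this file pins down:

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline
from transformers import CLIPSegForImageSegmentation, CLIPSegProcessor

# Assumed checkpoints: any CLIPSeg model and any Stable Diffusion inpainting model should work.
seg_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
seg_processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    custom_pipeline="text_inpainting",
    segmentation_model=seg_model,
    segmentation_processor=seg_processor,
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = Image.open("room.jpg").convert("RGB").resize((512, 512))  # any RGB photo

# `text` selects the region via CLIPSeg; `prompt` describes what to paint there.
result = pipe(image=image, text="the sofa", prompt="a leather armchair").images[0]
result.save("text_inpainting_result.png")
```
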
v0.19.2/tiled_upscaling.py ADDED
@@ -0,0 +1,298 @@
1
+ # Copyright 2023 Peter Willemsen <peter@codebuffet.co>. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import math
16
+ from typing import Callable, List, Optional, Union
17
+
18
+ import numpy as np
19
+ import PIL
20
+ import torch
21
+ from PIL import Image
22
+ from transformers import CLIPTextModel, CLIPTokenizer
23
+
24
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
25
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale import StableDiffusionUpscalePipeline
26
+ from diffusers.schedulers import DDIMScheduler, DDPMScheduler, LMSDiscreteScheduler, PNDMScheduler
27
+
28
+
29
+ def make_transparency_mask(size, overlap_pixels, remove_borders=[]):
30
+ size_x = size[0] - overlap_pixels * 2
31
+ size_y = size[1] - overlap_pixels * 2
32
+ for letter in ["l", "r"]:
33
+ if letter in remove_borders:
34
+ size_x += overlap_pixels
35
+ for letter in ["t", "b"]:
36
+ if letter in remove_borders:
37
+ size_y += overlap_pixels
38
+ mask = np.ones((size_y, size_x), dtype=np.uint8) * 255
39
+ mask = np.pad(mask, mode="linear_ramp", pad_width=overlap_pixels, end_values=0)
40
+
41
+ if "l" in remove_borders:
42
+ mask = mask[:, overlap_pixels : mask.shape[1]]
43
+ if "r" in remove_borders:
44
+ mask = mask[:, 0 : mask.shape[1] - overlap_pixels]
45
+ if "t" in remove_borders:
46
+ mask = mask[overlap_pixels : mask.shape[0], :]
47
+ if "b" in remove_borders:
48
+ mask = mask[0 : mask.shape[0] - overlap_pixels, :]
49
+ return mask
50
+
51
+
52
+ def clamp(n, smallest, largest):
53
+ return max(smallest, min(n, largest))
54
+
55
+
56
+ def clamp_rect(rect: List[int], min: List[int], max: List[int]):
57
+ return (
58
+ clamp(rect[0], min[0], max[0]),
59
+ clamp(rect[1], min[1], max[1]),
60
+ clamp(rect[2], min[0], max[0]),
61
+ clamp(rect[3], min[1], max[1]),
62
+ )
63
+
64
+
65
+ def add_overlap_rect(rect: List[int], overlap: int, image_size: List[int]):
66
+ rect = list(rect)
67
+ rect[0] -= overlap
68
+ rect[1] -= overlap
69
+ rect[2] += overlap
70
+ rect[3] += overlap
71
+ rect = clamp_rect(rect, [0, 0], [image_size[0], image_size[1]])
72
+ return rect
73
+
74
+
75
+ def squeeze_tile(tile, original_image, original_slice, slice_x):
76
+ result = Image.new("RGB", (tile.size[0] + original_slice, tile.size[1]))
77
+ result.paste(
78
+ original_image.resize((tile.size[0], tile.size[1]), Image.BICUBIC).crop(
79
+ (slice_x, 0, slice_x + original_slice, tile.size[1])
80
+ ),
81
+ (0, 0),
82
+ )
83
+ result.paste(tile, (original_slice, 0))
84
+ return result
85
+
86
+
87
+ def unsqueeze_tile(tile, original_image_slice):
88
+ crop_rect = (original_image_slice * 4, 0, tile.size[0], tile.size[1])
89
+ tile = tile.crop(crop_rect)
90
+ return tile
91
+
92
+
93
+ def next_divisible(n, d):
94
+ remainder = n % d
95
+ return n - remainder  # largest multiple of d not exceeding n
96
+
97
+
98
+ class StableDiffusionTiledUpscalePipeline(StableDiffusionUpscalePipeline):
99
+ r"""
100
+ Pipeline for tile-based text-guided image super-resolution using Stable Diffusion 2, trading memory for compute
101
+ to create gigantic images.
102
+
103
+ This model inherits from [`StableDiffusionUpscalePipeline`]. Check the superclass documentation for the generic methods the
104
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
105
+
106
+ Args:
107
+ vae ([`AutoencoderKL`]):
108
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
109
+ text_encoder ([`CLIPTextModel`]):
110
+ Frozen text-encoder. Stable Diffusion uses the text portion of
111
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
112
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
113
+ tokenizer (`CLIPTokenizer`):
114
+ Tokenizer of class
115
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
116
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
117
+ low_res_scheduler ([`SchedulerMixin`]):
118
+ A scheduler used to add initial noise to the low res conditioning image. It must be an instance of
119
+ [`DDPMScheduler`].
120
+ scheduler ([`SchedulerMixin`]):
121
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
122
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
123
+ """
124
+
125
+ def __init__(
126
+ self,
127
+ vae: AutoencoderKL,
128
+ text_encoder: CLIPTextModel,
129
+ tokenizer: CLIPTokenizer,
130
+ unet: UNet2DConditionModel,
131
+ low_res_scheduler: DDPMScheduler,
132
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
133
+ max_noise_level: int = 350,
134
+ ):
135
+ super().__init__(
136
+ vae=vae,
137
+ text_encoder=text_encoder,
138
+ tokenizer=tokenizer,
139
+ unet=unet,
140
+ low_res_scheduler=low_res_scheduler,
141
+ scheduler=scheduler,
142
+ max_noise_level=max_noise_level,
143
+ )
144
+
145
+ def _process_tile(self, original_image_slice, x, y, tile_size, tile_border, image, final_image, **kwargs):
146
+ torch.manual_seed(0)
147
+ crop_rect = (
148
+ min(image.size[0] - (tile_size + original_image_slice), x * tile_size),
149
+ min(image.size[1] - (tile_size + original_image_slice), y * tile_size),
150
+ min(image.size[0], (x + 1) * tile_size),
151
+ min(image.size[1], (y + 1) * tile_size),
152
+ )
153
+ crop_rect_with_overlap = add_overlap_rect(crop_rect, tile_border, image.size)
154
+ tile = image.crop(crop_rect_with_overlap)
155
+ translated_slice_x = ((crop_rect[0] + ((crop_rect[2] - crop_rect[0]) / 2)) / image.size[0]) * tile.size[0]
156
+ translated_slice_x = translated_slice_x - (original_image_slice / 2)
157
+ translated_slice_x = max(0, translated_slice_x)
158
+ to_input = squeeze_tile(tile, image, original_image_slice, translated_slice_x)
159
+ orig_input_size = to_input.size
160
+ to_input = to_input.resize((tile_size, tile_size), Image.BICUBIC)
161
+ upscaled_tile = super(StableDiffusionTiledUpscalePipeline, self).__call__(image=to_input, **kwargs).images[0]
162
+ upscaled_tile = upscaled_tile.resize((orig_input_size[0] * 4, orig_input_size[1] * 4), Image.BICUBIC)
163
+ upscaled_tile = unsqueeze_tile(upscaled_tile, original_image_slice)
164
+ upscaled_tile = upscaled_tile.resize((tile.size[0] * 4, tile.size[1] * 4), Image.BICUBIC)
165
+ remove_borders = []
166
+ if x == 0:
167
+ remove_borders.append("l")
168
+ elif crop_rect[2] == image.size[0]:
169
+ remove_borders.append("r")
170
+ if y == 0:
171
+ remove_borders.append("t")
172
+ elif crop_rect[3] == image.size[1]:
173
+ remove_borders.append("b")
174
+ transparency_mask = Image.fromarray(
175
+ make_transparency_mask(
176
+ (upscaled_tile.size[0], upscaled_tile.size[1]), tile_border * 4, remove_borders=remove_borders
177
+ ),
178
+ mode="L",
179
+ )
180
+ final_image.paste(
181
+ upscaled_tile, (crop_rect_with_overlap[0] * 4, crop_rect_with_overlap[1] * 4), transparency_mask
182
+ )
183
+
184
+ @torch.no_grad()
185
+ def __call__(
186
+ self,
187
+ prompt: Union[str, List[str]],
188
+ image: Union[PIL.Image.Image, List[PIL.Image.Image]],
189
+ num_inference_steps: int = 75,
190
+ guidance_scale: float = 9.0,
191
+ noise_level: int = 50,
192
+ negative_prompt: Optional[Union[str, List[str]]] = None,
193
+ num_images_per_prompt: Optional[int] = 1,
194
+ eta: float = 0.0,
195
+ generator: Optional[torch.Generator] = None,
196
+ latents: Optional[torch.FloatTensor] = None,
197
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
198
+ callback_steps: int = 1,
199
+ tile_size: int = 128,
200
+ tile_border: int = 32,
201
+ original_image_slice: int = 32,
202
+ ):
203
+ r"""
204
+ Function invoked when calling the pipeline for generation.
205
+
206
+ Args:
207
+ prompt (`str` or `List[str]`):
208
+ The prompt or prompts to guide the image generation.
209
+ image (`PIL.Image.Image` or List[`PIL.Image.Image`] or `torch.FloatTensor`):
210
+ `Image`, or tensor representing an image batch which will be upscaled.
211
+ num_inference_steps (`int`, *optional*, defaults to 75):
212
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
213
+ expense of slower inference.
214
+ guidance_scale (`float`, *optional*, defaults to 9.0):
215
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
216
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
217
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
218
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
219
+ usually at the expense of lower image quality.
220
+ negative_prompt (`str` or `List[str]`, *optional*):
221
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
222
+ if `guidance_scale` is less than `1`).
223
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
224
+ The number of images to generate per prompt.
225
+ eta (`float`, *optional*, defaults to 0.0):
226
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
227
+ [`schedulers.DDIMScheduler`], will be ignored for others.
228
+ generator (`torch.Generator`, *optional*):
229
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
230
+ deterministic.
231
+ latents (`torch.FloatTensor`, *optional*):
232
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
233
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
234
+ tensor will ge generated by sampling using the supplied random `generator`.
235
+ tile_size (`int`, *optional*):
236
+ The size of the tiles. Too big can result in an OOM-error.
237
+ tile_border (`int`, *optional*):
238
+ The number of pixels around a tile to consider (bigger means less seams, too big can lead to an OOM-error).
239
+ original_image_slice (`int`, *optional*):
240
+ The amount of pixels of the original image to calculate with the current tile (bigger means more depth
241
+ is preserved, less blur occurs in the final image, too big can lead to an OOM-error or loss in detail).
242
+ callback (`Callable`, *optional*):
243
+ A callback function that is called after each processed tile with a single `dict` argument
244
+ that contains the (partially) processed image under "image",
245
+ as well as the progress (0 to 1, where 1 is completed) under "progress".
246
+
247
+ Returns: A PIL.Image that is 4 times larger than the original input image.
248
+
249
+ """
250
+
251
+ final_image = Image.new("RGB", (image.size[0] * 4, image.size[1] * 4))
252
+ tcx = math.ceil(image.size[0] / tile_size)
253
+ tcy = math.ceil(image.size[1] / tile_size)
254
+ total_tile_count = tcx * tcy
255
+ current_count = 0
256
+ for y in range(tcy):
257
+ for x in range(tcx):
258
+ self._process_tile(
259
+ original_image_slice,
260
+ x,
261
+ y,
262
+ tile_size,
263
+ tile_border,
264
+ image,
265
+ final_image,
266
+ prompt=prompt,
267
+ num_inference_steps=num_inference_steps,
268
+ guidance_scale=guidance_scale,
269
+ noise_level=noise_level,
270
+ negative_prompt=negative_prompt,
271
+ num_images_per_prompt=num_images_per_prompt,
272
+ eta=eta,
273
+ generator=generator,
274
+ latents=latents,
275
+ )
276
+ current_count += 1
277
+ if callback is not None:
278
+ callback({"progress": current_count / total_tile_count, "image": final_image})
279
+ return final_image
280
+
281
+
282
+ def main():
283
+ # Run a demo
284
+ model_id = "stabilityai/stable-diffusion-x4-upscaler"
285
+ pipe = StableDiffusionTiledUpscalePipeline.from_pretrained(model_id, revision="fp16", torch_dtype=torch.float16)
286
+ pipe = pipe.to("cuda")
287
+ image = Image.open("../../docs/source/imgs/diffusers_library.jpg")
288
+
289
+ def callback(obj):
290
+ print(f"progress: {obj['progress']:.4f}")
291
+ obj["image"].save("diffusers_library_progress.jpg")
292
+
293
+ final_image = pipe(image=image, prompt="Black font, white background, vector", noise_level=40, callback=callback)
294
+ final_image.save("diffusers_library.jpg")
295
+
296
+
297
+ if __name__ == "__main__":
298
+ main()
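
The `main()` demo above constructs the class directly; the same file can also be loaded as a community pipeline. A sketch, assuming it is referenced by its file name `tiled_upscaling` (an assumption, not something this diff states):

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline

# custom_pipeline name assumed to match this file's name in the community folder.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    custom_pipeline="tiled_upscaling",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = Image.open("input.jpg").convert("RGB")  # any RGB image to be upscaled 4x

# __call__ returns a single PIL.Image stitched together from the individually upscaled tiles.
final = pipe(
    image=image,
    prompt="high quality, sharp details",
    noise_level=40,
    tile_size=128,
    tile_border=32,
)
final.save("upscaled_4x.jpg")
```
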
v0.19.2/unclip_image_interpolation.py ADDED
@@ -0,0 +1,495 @@
1
+ import inspect
2
+ from typing import List, Optional, Union
3
+
4
+ import PIL
5
+ import torch
6
+ from torch.nn import functional as F
7
+ from transformers import (
8
+ CLIPImageProcessor,
9
+ CLIPTextModelWithProjection,
10
+ CLIPTokenizer,
11
+ CLIPVisionModelWithProjection,
12
+ )
13
+
14
+ from diffusers import (
15
+ DiffusionPipeline,
16
+ ImagePipelineOutput,
17
+ UnCLIPScheduler,
18
+ UNet2DConditionModel,
19
+ UNet2DModel,
20
+ )
21
+ from diffusers.pipelines.unclip import UnCLIPTextProjModel
22
+ from diffusers.utils import is_accelerate_available, logging, randn_tensor
23
+
24
+
25
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
26
+
27
+
28
+ def slerp(val, low, high):
29
+ """
30
+ Find the interpolation point between the 'low' and 'high' values for the given 'val'. See https://en.wikipedia.org/wiki/Slerp for more details on the topic.
31
+ """
32
+ low_norm = low / torch.norm(low)
33
+ high_norm = high / torch.norm(high)
34
+ omega = torch.acos((low_norm * high_norm))
35
+ so = torch.sin(omega)
36
+ res = (torch.sin((1.0 - val) * omega) / so) * low + (torch.sin(val * omega) / so) * high
37
+ return res
38
+
39
+
40
+ class UnCLIPImageInterpolationPipeline(DiffusionPipeline):
41
+ """
42
+ Pipeline to generate variations from an input image using unCLIP
43
+
44
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
45
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
46
+
47
+ Args:
48
+ text_encoder ([`CLIPTextModelWithProjection`]):
49
+ Frozen text-encoder.
50
+ tokenizer (`CLIPTokenizer`):
51
+ Tokenizer of class
52
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
53
+ feature_extractor ([`CLIPImageProcessor`]):
54
+ Model that extracts features from generated images to be used as inputs for the `image_encoder`.
55
+ image_encoder ([`CLIPVisionModelWithProjection`]):
56
+ Frozen CLIP image-encoder. unCLIP Image Variation uses the vision portion of
57
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection),
58
+ specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
59
+ text_proj ([`UnCLIPTextProjModel`]):
60
+ Utility class to prepare and combine the embeddings before they are passed to the decoder.
61
+ decoder ([`UNet2DConditionModel`]):
62
+ The decoder to invert the image embedding into an image.
63
+ super_res_first ([`UNet2DModel`]):
64
+ Super resolution unet. Used in all but the last step of the super resolution diffusion process.
65
+ super_res_last ([`UNet2DModel`]):
66
+ Super resolution unet. Used in the last step of the super resolution diffusion process.
67
+ decoder_scheduler ([`UnCLIPScheduler`]):
68
+ Scheduler used in the decoder denoising process. Just a modified DDPMScheduler.
69
+ super_res_scheduler ([`UnCLIPScheduler`]):
70
+ Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler.
71
+
72
+ """
73
+
74
+ decoder: UNet2DConditionModel
75
+ text_proj: UnCLIPTextProjModel
76
+ text_encoder: CLIPTextModelWithProjection
77
+ tokenizer: CLIPTokenizer
78
+ feature_extractor: CLIPImageProcessor
79
+ image_encoder: CLIPVisionModelWithProjection
80
+ super_res_first: UNet2DModel
81
+ super_res_last: UNet2DModel
82
+
83
+ decoder_scheduler: UnCLIPScheduler
84
+ super_res_scheduler: UnCLIPScheduler
85
+
86
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.__init__
87
+ def __init__(
88
+ self,
89
+ decoder: UNet2DConditionModel,
90
+ text_encoder: CLIPTextModelWithProjection,
91
+ tokenizer: CLIPTokenizer,
92
+ text_proj: UnCLIPTextProjModel,
93
+ feature_extractor: CLIPImageProcessor,
94
+ image_encoder: CLIPVisionModelWithProjection,
95
+ super_res_first: UNet2DModel,
96
+ super_res_last: UNet2DModel,
97
+ decoder_scheduler: UnCLIPScheduler,
98
+ super_res_scheduler: UnCLIPScheduler,
99
+ ):
100
+ super().__init__()
101
+
102
+ self.register_modules(
103
+ decoder=decoder,
104
+ text_encoder=text_encoder,
105
+ tokenizer=tokenizer,
106
+ text_proj=text_proj,
107
+ feature_extractor=feature_extractor,
108
+ image_encoder=image_encoder,
109
+ super_res_first=super_res_first,
110
+ super_res_last=super_res_last,
111
+ decoder_scheduler=decoder_scheduler,
112
+ super_res_scheduler=super_res_scheduler,
113
+ )
114
+
115
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
116
+ def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
117
+ if latents is None:
118
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
119
+ else:
120
+ if latents.shape != shape:
121
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
122
+ latents = latents.to(device)
123
+
124
+ latents = latents * scheduler.init_noise_sigma
125
+ return latents
126
+
127
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_prompt
128
+ def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance):
129
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
130
+
131
+ # get prompt text embeddings
132
+ text_inputs = self.tokenizer(
133
+ prompt,
134
+ padding="max_length",
135
+ max_length=self.tokenizer.model_max_length,
136
+ return_tensors="pt",
137
+ )
138
+ text_input_ids = text_inputs.input_ids
139
+ text_mask = text_inputs.attention_mask.bool().to(device)
140
+ text_encoder_output = self.text_encoder(text_input_ids.to(device))
141
+
142
+ prompt_embeds = text_encoder_output.text_embeds
143
+ text_encoder_hidden_states = text_encoder_output.last_hidden_state
144
+
145
+ prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
146
+ text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
147
+ text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
148
+
149
+ if do_classifier_free_guidance:
150
+ uncond_tokens = [""] * batch_size
151
+
152
+ max_length = text_input_ids.shape[-1]
153
+ uncond_input = self.tokenizer(
154
+ uncond_tokens,
155
+ padding="max_length",
156
+ max_length=max_length,
157
+ truncation=True,
158
+ return_tensors="pt",
159
+ )
160
+ uncond_text_mask = uncond_input.attention_mask.bool().to(device)
161
+ negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))
162
+
163
+ negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds
164
+ uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state
165
+
166
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
167
+
168
+ seq_len = negative_prompt_embeds.shape[1]
169
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
170
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)
171
+
172
+ seq_len = uncond_text_encoder_hidden_states.shape[1]
173
+ uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
174
+ uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
175
+ batch_size * num_images_per_prompt, seq_len, -1
176
+ )
177
+ uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
178
+
179
+ # done duplicates
180
+
181
+ # For classifier free guidance, we need to do two forward passes.
182
+ # Here we concatenate the unconditional and text embeddings into a single batch
183
+ # to avoid doing two forward passes
184
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
185
+ text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])
186
+
187
+ text_mask = torch.cat([uncond_text_mask, text_mask])
188
+
189
+ return prompt_embeds, text_encoder_hidden_states, text_mask
190
+
191
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_image
192
+ def _encode_image(self, image, device, num_images_per_prompt, image_embeddings: Optional[torch.Tensor] = None):
193
+ dtype = next(self.image_encoder.parameters()).dtype
194
+
195
+ if image_embeddings is None:
196
+ if not isinstance(image, torch.Tensor):
197
+ image = self.feature_extractor(images=image, return_tensors="pt").pixel_values
198
+
199
+ image = image.to(device=device, dtype=dtype)
200
+ image_embeddings = self.image_encoder(image).image_embeds
201
+
202
+ image_embeddings = image_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
203
+
204
+ return image_embeddings
205
+
206
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.enable_sequential_cpu_offload
207
+ def enable_sequential_cpu_offload(self, gpu_id=0):
208
+ r"""
209
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's
210
+ models have their state dicts saved to CPU and then are moved to a `torch.device('meta')` and loaded to GPU only
211
+ when their specific submodule has its `forward` method called.
212
+ """
213
+ if is_accelerate_available():
214
+ from accelerate import cpu_offload
215
+ else:
216
+ raise ImportError("Please install accelerate via `pip install accelerate`")
217
+
218
+ device = torch.device(f"cuda:{gpu_id}")
219
+
220
+ models = [
221
+ self.decoder,
222
+ self.text_proj,
223
+ self.text_encoder,
224
+ self.super_res_first,
225
+ self.super_res_last,
226
+ ]
227
+ for cpu_offloaded_model in models:
228
+ if cpu_offloaded_model is not None:
229
+ cpu_offload(cpu_offloaded_model, device)
230
+
231
+ @property
232
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._execution_device
233
+ def _execution_device(self):
234
+ r"""
235
+ Returns the device on which the pipeline's models will be executed. After calling
236
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
237
+ hooks.
238
+ """
239
+ if self.device != torch.device("meta") or not hasattr(self.decoder, "_hf_hook"):
240
+ return self.device
241
+ for module in self.decoder.modules():
242
+ if (
243
+ hasattr(module, "_hf_hook")
244
+ and hasattr(module._hf_hook, "execution_device")
245
+ and module._hf_hook.execution_device is not None
246
+ ):
247
+ return torch.device(module._hf_hook.execution_device)
248
+ return self.device
249
+
250
+ @torch.no_grad()
251
+ def __call__(
252
+ self,
253
+ image: Optional[Union[List[PIL.Image.Image], torch.FloatTensor]] = None,
254
+ steps: int = 5,
255
+ decoder_num_inference_steps: int = 25,
256
+ super_res_num_inference_steps: int = 7,
257
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
258
+ image_embeddings: Optional[torch.Tensor] = None,
259
+ decoder_latents: Optional[torch.FloatTensor] = None,
260
+ super_res_latents: Optional[torch.FloatTensor] = None,
261
+ decoder_guidance_scale: float = 8.0,
262
+ output_type: Optional[str] = "pil",
263
+ return_dict: bool = True,
264
+ ):
265
+ """
266
+ Function invoked when calling the pipeline for generation.
267
+
268
+ Args:
269
+ image (`List[PIL.Image.Image]` or `torch.FloatTensor`):
270
+ The images to use for the image interpolation. Only accepts a list of two PIL Images, or, if you provide a tensor, it needs to comply with the
271
+ configuration of
272
+ [this](https://huggingface.co/fusing/karlo-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json)
273
+ `CLIPImageProcessor` while still having a shape of two in the 0th dimension. Can be left to `None` only when `image_embeddings` are passed.
274
+ steps (`int`, *optional*, defaults to 5):
275
+ The number of interpolation images to generate.
276
+ decoder_num_inference_steps (`int`, *optional*, defaults to 25):
277
+ The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality
278
+ image at the expense of slower inference.
279
+ super_res_num_inference_steps (`int`, *optional*, defaults to 7):
280
+ The number of denoising steps for super resolution. More denoising steps usually lead to a higher
281
+ quality image at the expense of slower inference.
282
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
283
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
284
+ to make generation deterministic.
285
+ image_embeddings (`torch.Tensor`, *optional*):
286
+ Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings
287
+ can be passed for tasks like image interpolations. `image` can then be left to `None`.
288
+ decoder_latents (`torch.FloatTensor` of shape (batch size, channels, height, width), *optional*):
289
+ Pre-generated noisy latents to be used as inputs for the decoder.
290
+ super_res_latents (`torch.FloatTensor` of shape (batch size, channels, super res height, super res width), *optional*):
291
+ Pre-generated noisy latents to be used as inputs for the super resolution.
292
+ decoder_guidance_scale (`float`, *optional*, defaults to 8.0):
293
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
294
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
295
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
296
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
297
+ usually at the expense of lower image quality.
298
+ output_type (`str`, *optional*, defaults to `"pil"`):
299
+ The output format of the generated image. Choose between
300
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
301
+ return_dict (`bool`, *optional*, defaults to `True`):
302
+ Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
303
+ """
304
+
305
+ batch_size = steps
306
+
307
+ device = self._execution_device
308
+
309
+ if isinstance(image, List):
310
+ if len(image) != 2:
311
+ raise AssertionError(
312
+ f"Expected 'image' List to be of size 2, but passed 'image' length is {len(image)}"
313
+ )
314
+ elif not (isinstance(image[0], PIL.Image.Image) and isinstance(image[1], PIL.Image.Image)):
315
+ raise AssertionError(
316
+ f"Expected 'image' List to contain PIL.Image.Image, but passed 'image' contents are {type(image[0])} and {type(image[1])}"
317
+ )
318
+ elif isinstance(image, torch.FloatTensor):
319
+ if image.shape[0] != 2:
320
+ raise AssertionError(
321
+ f"Expected 'image' to be torch.FloatTensor of shape 2 in 0th dimension, but passed 'image' size is {image.shape[0]}"
322
+ )
323
+ elif isinstance(image_embeddings, torch.Tensor):
324
+ if image_embeddings.shape[0] != 2:
325
+ raise AssertionError(
326
+ f"Expected 'image_embeddings' to be torch.FloatTensor of shape 2 in 0th dimension, but passed 'image_embeddings' shape is {image_embeddings.shape[0]}"
327
+ )
328
+ else:
329
+ raise AssertionError(
330
+ f"Expected 'image' or 'image_embeddings' to be not None with types List[PIL.Image] or Torch.FloatTensor respectively. Received {type(image)} and {type(image_embeddings)} repsectively"
331
+ )
332
+
333
+ original_image_embeddings = self._encode_image(
334
+ image=image, device=device, num_images_per_prompt=1, image_embeddings=image_embeddings
335
+ )
336
+
337
+ image_embeddings = []
338
+
339
+ for interp_step in torch.linspace(0, 1, steps):
340
+ temp_image_embeddings = slerp(
341
+ interp_step, original_image_embeddings[0], original_image_embeddings[1]
342
+ ).unsqueeze(0)
343
+ image_embeddings.append(temp_image_embeddings)
344
+
345
+ image_embeddings = torch.cat(image_embeddings).to(device)
346
+
347
+ do_classifier_free_guidance = decoder_guidance_scale > 1.0
348
+
349
+ prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt(
350
+ prompt=["" for i in range(steps)],
351
+ device=device,
352
+ num_images_per_prompt=1,
353
+ do_classifier_free_guidance=do_classifier_free_guidance,
354
+ )
355
+
356
+ text_encoder_hidden_states, additive_clip_time_embeddings = self.text_proj(
357
+ image_embeddings=image_embeddings,
358
+ prompt_embeds=prompt_embeds,
359
+ text_encoder_hidden_states=text_encoder_hidden_states,
360
+ do_classifier_free_guidance=do_classifier_free_guidance,
361
+ )
362
+
363
+ if device.type == "mps":
364
+ # HACK: MPS: There is a panic when padding bool tensors,
365
+ # so cast to int tensor for the pad and back to bool afterwards
366
+ text_mask = text_mask.type(torch.int)
367
+ decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=1)
368
+ decoder_text_mask = decoder_text_mask.type(torch.bool)
369
+ else:
370
+ decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=True)
371
+
372
+ self.decoder_scheduler.set_timesteps(decoder_num_inference_steps, device=device)
373
+ decoder_timesteps_tensor = self.decoder_scheduler.timesteps
374
+
375
+ num_channels_latents = self.decoder.config.in_channels
376
+ height = self.decoder.config.sample_size
377
+ width = self.decoder.config.sample_size
378
+
379
+ # Get the decoder latents for 1 step and then repeat the same tensor for the entire batch to keep same noise across all interpolation steps.
380
+ decoder_latents = self.prepare_latents(
381
+ (1, num_channels_latents, height, width),
382
+ text_encoder_hidden_states.dtype,
383
+ device,
384
+ generator,
385
+ decoder_latents,
386
+ self.decoder_scheduler,
387
+ )
388
+ decoder_latents = decoder_latents.repeat((batch_size, 1, 1, 1))
389
+
390
+ for i, t in enumerate(self.progress_bar(decoder_timesteps_tensor)):
391
+ # expand the latents if we are doing classifier free guidance
392
+ latent_model_input = torch.cat([decoder_latents] * 2) if do_classifier_free_guidance else decoder_latents
393
+
394
+ noise_pred = self.decoder(
395
+ sample=latent_model_input,
396
+ timestep=t,
397
+ encoder_hidden_states=text_encoder_hidden_states,
398
+ class_labels=additive_clip_time_embeddings,
399
+ attention_mask=decoder_text_mask,
400
+ ).sample
401
+
402
+ if do_classifier_free_guidance:
403
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
404
+ noise_pred_uncond, _ = noise_pred_uncond.split(latent_model_input.shape[1], dim=1)
405
+ noise_pred_text, predicted_variance = noise_pred_text.split(latent_model_input.shape[1], dim=1)
406
+ noise_pred = noise_pred_uncond + decoder_guidance_scale * (noise_pred_text - noise_pred_uncond)
407
+ noise_pred = torch.cat([noise_pred, predicted_variance], dim=1)
408
+
409
+ if i + 1 == decoder_timesteps_tensor.shape[0]:
410
+ prev_timestep = None
411
+ else:
412
+ prev_timestep = decoder_timesteps_tensor[i + 1]
413
+
414
+ # compute the previous noisy sample x_t -> x_t-1
415
+ decoder_latents = self.decoder_scheduler.step(
416
+ noise_pred, t, decoder_latents, prev_timestep=prev_timestep, generator=generator
417
+ ).prev_sample
418
+
419
+ decoder_latents = decoder_latents.clamp(-1, 1)
420
+
421
+ image_small = decoder_latents
422
+
423
+ # done decoder
424
+
425
+ # super res
426
+
427
+ self.super_res_scheduler.set_timesteps(super_res_num_inference_steps, device=device)
428
+ super_res_timesteps_tensor = self.super_res_scheduler.timesteps
429
+
430
+ channels = self.super_res_first.config.in_channels // 2
431
+ height = self.super_res_first.config.sample_size
432
+ width = self.super_res_first.config.sample_size
433
+
434
+ super_res_latents = self.prepare_latents(
435
+ (batch_size, channels, height, width),
436
+ image_small.dtype,
437
+ device,
438
+ generator,
439
+ super_res_latents,
440
+ self.super_res_scheduler,
441
+ )
442
+
443
+ if device.type == "mps":
444
+ # MPS does not support many interpolations
445
+ image_upscaled = F.interpolate(image_small, size=[height, width])
446
+ else:
447
+ interpolate_antialias = {}
448
+ if "antialias" in inspect.signature(F.interpolate).parameters:
449
+ interpolate_antialias["antialias"] = True
450
+
451
+ image_upscaled = F.interpolate(
452
+ image_small, size=[height, width], mode="bicubic", align_corners=False, **interpolate_antialias
453
+ )
454
+
455
+ for i, t in enumerate(self.progress_bar(super_res_timesteps_tensor)):
456
+ # no classifier free guidance
457
+
458
+ if i == super_res_timesteps_tensor.shape[0] - 1:
459
+ unet = self.super_res_last
460
+ else:
461
+ unet = self.super_res_first
462
+
463
+ latent_model_input = torch.cat([super_res_latents, image_upscaled], dim=1)
464
+
465
+ noise_pred = unet(
466
+ sample=latent_model_input,
467
+ timestep=t,
468
+ ).sample
469
+
470
+ if i + 1 == super_res_timesteps_tensor.shape[0]:
471
+ prev_timestep = None
472
+ else:
473
+ prev_timestep = super_res_timesteps_tensor[i + 1]
474
+
475
+ # compute the previous noisy sample x_t -> x_t-1
476
+ super_res_latents = self.super_res_scheduler.step(
477
+ noise_pred, t, super_res_latents, prev_timestep=prev_timestep, generator=generator
478
+ ).prev_sample
479
+
480
+ image = super_res_latents
481
+ # done super res
482
+
483
+ # post processing
484
+
485
+ image = image * 0.5 + 0.5
486
+ image = image.clamp(0, 1)
487
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
488
+
489
+ if output_type == "pil":
490
+ image = self.numpy_to_pil(image)
491
+
492
+ if not return_dict:
493
+ return (image,)
494
+
495
+ return ImagePipelineOutput(images=image)
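
A minimal usage sketch for the interpolation pipeline above, loaded as the `unclip_image_interpolation` community pipeline. The karlo image-variations checkpoint id is an assumption for illustration:

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline

# Assumed checkpoint: any unCLIP image-variation model providing the modules registered above should work.
pipe = DiffusionPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha-image-variations",
    custom_pipeline="unclip_image_interpolation",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

images = [Image.open("start.jpg").convert("RGB"), Image.open("end.jpg").convert("RGB")]

# `steps` is the number of frames produced by slerping between the two image embeddings.
output = pipe(image=images, steps=6, generator=torch.Generator(device="cuda").manual_seed(42))
for i, frame in enumerate(output.images):
    frame.save(f"unclip_interpolation_{i}.png")
```
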
v0.19.2/unclip_text_interpolation.py ADDED
@@ -0,0 +1,573 @@
1
+ import inspect
2
+ from typing import List, Optional, Tuple, Union
3
+
4
+ import torch
5
+ from torch.nn import functional as F
6
+ from transformers import CLIPTextModelWithProjection, CLIPTokenizer
7
+ from transformers.models.clip.modeling_clip import CLIPTextModelOutput
8
+
9
+ from diffusers import (
10
+ DiffusionPipeline,
11
+ ImagePipelineOutput,
12
+ PriorTransformer,
13
+ UnCLIPScheduler,
14
+ UNet2DConditionModel,
15
+ UNet2DModel,
16
+ )
17
+ from diffusers.pipelines.unclip import UnCLIPTextProjModel
18
+ from diffusers.utils import is_accelerate_available, logging, randn_tensor
19
+
20
+
21
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
22
+
23
+
24
+ def slerp(val, low, high):
25
+ """
26
+ Find the interpolation point between the 'low' and 'high' values for the given 'val'. See https://en.wikipedia.org/wiki/Slerp for more details on the topic.
27
+ """
28
+ low_norm = low / torch.norm(low)
29
+ high_norm = high / torch.norm(high)
30
+ omega = torch.acos((low_norm * high_norm))
31
+ so = torch.sin(omega)
32
+ res = (torch.sin((1.0 - val) * omega) / so) * low + (torch.sin(val * omega) / so) * high
33
+ return res
34
+
35
+
36
+ class UnCLIPTextInterpolationPipeline(DiffusionPipeline):
37
+
38
+ """
39
+ Pipeline for prompt-to-prompt interpolation on CLIP text embeddings, using the unCLIP / DALL-E 2 decoder to turn them into images.
40
+
41
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
42
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
43
+
44
+ Args:
45
+ text_encoder ([`CLIPTextModelWithProjection`]):
46
+ Frozen text-encoder.
47
+ tokenizer (`CLIPTokenizer`):
48
+ Tokenizer of class
49
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
50
+ prior ([`PriorTransformer`]):
51
+ The canonical unCLIP prior to approximate the image embedding from the text embedding.
52
+ text_proj ([`UnCLIPTextProjModel`]):
53
+ Utility class to prepare and combine the embeddings before they are passed to the decoder.
54
+ decoder ([`UNet2DConditionModel`]):
55
+ The decoder to invert the image embedding into an image.
56
+ super_res_first ([`UNet2DModel`]):
57
+ Super resolution unet. Used in all but the last step of the super resolution diffusion process.
58
+ super_res_last ([`UNet2DModel`]):
59
+ Super resolution unet. Used in the last step of the super resolution diffusion process.
60
+ prior_scheduler ([`UnCLIPScheduler`]):
61
+ Scheduler used in the prior denoising process. Just a modified DDPMScheduler.
62
+ decoder_scheduler ([`UnCLIPScheduler`]):
63
+ Scheduler used in the decoder denoising process. Just a modified DDPMScheduler.
64
+ super_res_scheduler ([`UnCLIPScheduler`]):
65
+ Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler.
66
+
67
+ """
68
+
69
+ prior: PriorTransformer
70
+ decoder: UNet2DConditionModel
71
+ text_proj: UnCLIPTextProjModel
72
+ text_encoder: CLIPTextModelWithProjection
73
+ tokenizer: CLIPTokenizer
74
+ super_res_first: UNet2DModel
75
+ super_res_last: UNet2DModel
76
+
77
+ prior_scheduler: UnCLIPScheduler
78
+ decoder_scheduler: UnCLIPScheduler
79
+ super_res_scheduler: UnCLIPScheduler
80
+
81
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.__init__
82
+ def __init__(
83
+ self,
84
+ prior: PriorTransformer,
85
+ decoder: UNet2DConditionModel,
86
+ text_encoder: CLIPTextModelWithProjection,
87
+ tokenizer: CLIPTokenizer,
88
+ text_proj: UnCLIPTextProjModel,
89
+ super_res_first: UNet2DModel,
90
+ super_res_last: UNet2DModel,
91
+ prior_scheduler: UnCLIPScheduler,
92
+ decoder_scheduler: UnCLIPScheduler,
93
+ super_res_scheduler: UnCLIPScheduler,
94
+ ):
95
+ super().__init__()
96
+
97
+ self.register_modules(
98
+ prior=prior,
99
+ decoder=decoder,
100
+ text_encoder=text_encoder,
101
+ tokenizer=tokenizer,
102
+ text_proj=text_proj,
103
+ super_res_first=super_res_first,
104
+ super_res_last=super_res_last,
105
+ prior_scheduler=prior_scheduler,
106
+ decoder_scheduler=decoder_scheduler,
107
+ super_res_scheduler=super_res_scheduler,
108
+ )
109
+
110
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
111
+ def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
112
+ if latents is None:
113
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
114
+ else:
115
+ if latents.shape != shape:
116
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
117
+ latents = latents.to(device)
118
+
119
+ latents = latents * scheduler.init_noise_sigma
120
+ return latents
121
+
122
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._encode_prompt
123
+ def _encode_prompt(
124
+ self,
125
+ prompt,
126
+ device,
127
+ num_images_per_prompt,
128
+ do_classifier_free_guidance,
129
+ text_model_output: Optional[Union[CLIPTextModelOutput, Tuple]] = None,
130
+ text_attention_mask: Optional[torch.Tensor] = None,
131
+ ):
132
+ if text_model_output is None:
133
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
134
+ # get prompt text embeddings
135
+ text_inputs = self.tokenizer(
136
+ prompt,
137
+ padding="max_length",
138
+ max_length=self.tokenizer.model_max_length,
139
+ truncation=True,
140
+ return_tensors="pt",
141
+ )
142
+ text_input_ids = text_inputs.input_ids
143
+ text_mask = text_inputs.attention_mask.bool().to(device)
144
+
145
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
146
+
147
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
148
+ text_input_ids, untruncated_ids
149
+ ):
150
+ removed_text = self.tokenizer.batch_decode(
151
+ untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
152
+ )
153
+ logger.warning(
154
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
155
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
156
+ )
157
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
158
+
159
+ text_encoder_output = self.text_encoder(text_input_ids.to(device))
160
+
161
+ prompt_embeds = text_encoder_output.text_embeds
162
+ text_encoder_hidden_states = text_encoder_output.last_hidden_state
163
+
164
+ else:
165
+ batch_size = text_model_output[0].shape[0]
166
+ prompt_embeds, text_encoder_hidden_states = text_model_output[0], text_model_output[1]
167
+ text_mask = text_attention_mask
168
+
169
+ prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
170
+ text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
171
+ text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
172
+
173
+ if do_classifier_free_guidance:
174
+ uncond_tokens = [""] * batch_size
175
+
176
+ uncond_input = self.tokenizer(
177
+ uncond_tokens,
178
+ padding="max_length",
179
+ max_length=self.tokenizer.model_max_length,
180
+ truncation=True,
181
+ return_tensors="pt",
182
+ )
183
+ uncond_text_mask = uncond_input.attention_mask.bool().to(device)
184
+ negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))
185
+
186
+ negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds
187
+ uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state
188
+
189
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
190
+
191
+ seq_len = negative_prompt_embeds.shape[1]
192
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
193
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)
194
+
195
+ seq_len = uncond_text_encoder_hidden_states.shape[1]
196
+ uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
197
+ uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
198
+ batch_size * num_images_per_prompt, seq_len, -1
199
+ )
200
+ uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
201
+
202
+ # done duplicates
203
+
204
+ # For classifier free guidance, we need to do two forward passes.
205
+ # Here we concatenate the unconditional and text embeddings into a single batch
206
+ # to avoid doing two forward passes
207
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
208
+ text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])
209
+
210
+ text_mask = torch.cat([uncond_text_mask, text_mask])
211
+
212
+ return prompt_embeds, text_encoder_hidden_states, text_mask
213
+
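The concatenation at the end of `_encode_prompt` is the usual classifier-free guidance batching trick: unconditional and conditional inputs go through the model as a single batch, and the two halves of the prediction are recombined later with a guidance scale. A toy, shape-only illustration (all sizes invented):

```python
import torch

guidance_scale = 4.0

# Pretend model output for a duplicated batch of 3: first half unconditional, second half conditional.
prediction = torch.randn(2 * 3, 16)

pred_uncond, pred_text = prediction.chunk(2)
guided = pred_uncond + guidance_scale * (pred_text - pred_uncond)
print(guided.shape)  # torch.Size([3, 16])
```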
214
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.enable_sequential_cpu_offload
215
+ def enable_sequential_cpu_offload(self, gpu_id=0):
216
+ r"""
217
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's
218
+ models have their state dicts saved to CPU and then are moved to a `torch.device('meta')` and loaded to GPU only
219
+ when their specific submodule has its `forward` method called.
220
+ """
221
+ if is_accelerate_available():
222
+ from accelerate import cpu_offload
223
+ else:
224
+ raise ImportError("Please install accelerate via `pip install accelerate`")
225
+
226
+ device = torch.device(f"cuda:{gpu_id}")
227
+
228
+ # TODO: self.prior.post_process_latents is not covered by the offload hooks, so it fails if added to the list
229
+ models = [
230
+ self.decoder,
231
+ self.text_proj,
232
+ self.text_encoder,
233
+ self.super_res_first,
234
+ self.super_res_last,
235
+ ]
236
+ for cpu_offloaded_model in models:
237
+ if cpu_offloaded_model is not None:
238
+ cpu_offload(cpu_offloaded_model, device)
239
+
240
+ @property
241
+ # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._execution_device
242
+ def _execution_device(self):
243
+ r"""
244
+ Returns the device on which the pipeline's models will be executed. After calling
245
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
246
+ hooks.
247
+ """
248
+ if self.device != torch.device("meta") or not hasattr(self.decoder, "_hf_hook"):
249
+ return self.device
250
+ for module in self.decoder.modules():
251
+ if (
252
+ hasattr(module, "_hf_hook")
253
+ and hasattr(module._hf_hook, "execution_device")
254
+ and module._hf_hook.execution_device is not None
255
+ ):
256
+ return torch.device(module._hf_hook.execution_device)
257
+ return self.device
258
+
259
+ @torch.no_grad()
260
+ def __call__(
261
+ self,
262
+ start_prompt: str,
263
+ end_prompt: str,
264
+ steps: int = 5,
265
+ prior_num_inference_steps: int = 25,
266
+ decoder_num_inference_steps: int = 25,
267
+ super_res_num_inference_steps: int = 7,
268
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
269
+ prior_guidance_scale: float = 4.0,
270
+ decoder_guidance_scale: float = 8.0,
271
+ enable_sequential_cpu_offload=True,
272
+ gpu_id=0,
273
+ output_type: Optional[str] = "pil",
274
+ return_dict: bool = True,
275
+ ):
276
+ """
277
+ Function invoked when calling the pipeline for generation.
278
+
279
+ Args:
280
+ start_prompt (`str`):
281
+ The prompt to start the image generation interpolation from.
282
+ end_prompt (`str`):
283
+ The prompt to end the image generation interpolation at.
284
+ steps (`int`, *optional*, defaults to 5):
285
+ The number of steps over which to interpolate from start_prompt to end_prompt. The pipeline returns
286
+ the same number of images as this value.
287
+ prior_num_inference_steps (`int`, *optional*, defaults to 25):
288
+ The number of denoising steps for the prior. More denoising steps usually lead to a higher quality
289
+ image at the expense of slower inference.
290
+ decoder_num_inference_steps (`int`, *optional*, defaults to 25):
291
+ The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality
292
+ image at the expense of slower inference.
293
+ super_res_num_inference_steps (`int`, *optional*, defaults to 7):
294
+ The number of denoising steps for super resolution. More denoising steps usually lead to a higher
295
+ quality image at the expense of slower inference.
296
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
297
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
298
+ to make generation deterministic.
299
+ prior_guidance_scale (`float`, *optional*, defaults to 4.0):
300
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
301
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
302
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
303
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
304
+ usually at the expense of lower image quality.
305
+ decoder_guidance_scale (`float`, *optional*, defaults to 8.0):
306
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
307
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
308
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
309
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
310
+ usually at the expense of lower image quality.
311
+ output_type (`str`, *optional*, defaults to `"pil"`):
312
+ The output format of the generated image. Choose between
313
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
314
+ enable_sequential_cpu_offload (`bool`, *optional*, defaults to `True`):
315
+ If True, offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's
316
+ models have their state dicts saved to CPU and then are moved to a `torch.device('meta')` and loaded to GPU only
317
+ when their specific submodule has its `forward` method called.
318
+ gpu_id (`int`, *optional*, defaults to `0`):
319
+ The gpu_id to be passed to enable_sequential_cpu_offload. Only works when enable_sequential_cpu_offload is set to True.
320
+ return_dict (`bool`, *optional*, defaults to `True`):
321
+ Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
322
+ """
323
+
324
+ if not isinstance(start_prompt, str) or not isinstance(end_prompt, str):
325
+ raise ValueError(
326
+ f"`start_prompt` and `end_prompt` should be of type `str` but got {type(start_prompt)} and"
327
+ f" {type(end_prompt)} instead"
328
+ )
329
+
330
+ if enable_sequential_cpu_offload:
331
+ self.enable_sequential_cpu_offload(gpu_id=gpu_id)
332
+
333
+ device = self._execution_device
334
+
335
+ # Turn the prompts into embeddings.
336
+ inputs = self.tokenizer(
337
+ [start_prompt, end_prompt],
338
+ padding="max_length",
339
+ truncation=True,
340
+ max_length=self.tokenizer.model_max_length,
341
+ return_tensors="pt",
342
+ )
343
+ inputs.to(device)
344
+ text_model_output = self.text_encoder(**inputs)
345
+
346
+ text_attention_mask = torch.max(inputs.attention_mask[0], inputs.attention_mask[1])
347
+ text_attention_mask = torch.cat([text_attention_mask.unsqueeze(0)] * steps).to(device)
348
+
349
+ # Interpolate from the start to end prompt using slerp and add the generated images to an image output pipeline
350
+ batch_text_embeds = []
351
+ batch_last_hidden_state = []
352
+
353
+ for interp_val in torch.linspace(0, 1, steps):
354
+ text_embeds = slerp(interp_val, text_model_output.text_embeds[0], text_model_output.text_embeds[1])
355
+ last_hidden_state = slerp(
356
+ interp_val, text_model_output.last_hidden_state[0], text_model_output.last_hidden_state[1]
357
+ )
358
+ batch_text_embeds.append(text_embeds.unsqueeze(0))
359
+ batch_last_hidden_state.append(last_hidden_state.unsqueeze(0))
360
+
361
+ batch_text_embeds = torch.cat(batch_text_embeds)
362
+ batch_last_hidden_state = torch.cat(batch_last_hidden_state)
363
+
364
+ text_model_output = CLIPTextModelOutput(
365
+ text_embeds=batch_text_embeds, last_hidden_state=batch_last_hidden_state
366
+ )
367
+
368
+ batch_size = text_model_output[0].shape[0]
369
+
370
+ do_classifier_free_guidance = prior_guidance_scale > 1.0 or decoder_guidance_scale > 1.0
371
+
372
+ prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt(
373
+ prompt=None,
374
+ device=device,
375
+ num_images_per_prompt=1,
376
+ do_classifier_free_guidance=do_classifier_free_guidance,
377
+ text_model_output=text_model_output,
378
+ text_attention_mask=text_attention_mask,
379
+ )
380
+
381
+ # prior
382
+
383
+ self.prior_scheduler.set_timesteps(prior_num_inference_steps, device=device)
384
+ prior_timesteps_tensor = self.prior_scheduler.timesteps
385
+
386
+ embedding_dim = self.prior.config.embedding_dim
387
+
388
+ prior_latents = self.prepare_latents(
389
+ (batch_size, embedding_dim),
390
+ prompt_embeds.dtype,
391
+ device,
392
+ generator,
393
+ None,
394
+ self.prior_scheduler,
395
+ )
396
+
397
+ for i, t in enumerate(self.progress_bar(prior_timesteps_tensor)):
398
+ # expand the latents if we are doing classifier free guidance
399
+ latent_model_input = torch.cat([prior_latents] * 2) if do_classifier_free_guidance else prior_latents
400
+
401
+ predicted_image_embedding = self.prior(
402
+ latent_model_input,
403
+ timestep=t,
404
+ proj_embedding=prompt_embeds,
405
+ encoder_hidden_states=text_encoder_hidden_states,
406
+ attention_mask=text_mask,
407
+ ).predicted_image_embedding
408
+
409
+ if do_classifier_free_guidance:
410
+ predicted_image_embedding_uncond, predicted_image_embedding_text = predicted_image_embedding.chunk(2)
411
+ predicted_image_embedding = predicted_image_embedding_uncond + prior_guidance_scale * (
412
+ predicted_image_embedding_text - predicted_image_embedding_uncond
413
+ )
414
+
415
+ if i + 1 == prior_timesteps_tensor.shape[0]:
416
+ prev_timestep = None
417
+ else:
418
+ prev_timestep = prior_timesteps_tensor[i + 1]
419
+
420
+ prior_latents = self.prior_scheduler.step(
421
+ predicted_image_embedding,
422
+ timestep=t,
423
+ sample=prior_latents,
424
+ generator=generator,
425
+ prev_timestep=prev_timestep,
426
+ ).prev_sample
427
+
428
+ prior_latents = self.prior.post_process_latents(prior_latents)
429
+
430
+ image_embeddings = prior_latents
431
+
432
+ # done prior
433
+
434
+ # decoder
435
+
436
+ text_encoder_hidden_states, additive_clip_time_embeddings = self.text_proj(
437
+ image_embeddings=image_embeddings,
438
+ prompt_embeds=prompt_embeds,
439
+ text_encoder_hidden_states=text_encoder_hidden_states,
440
+ do_classifier_free_guidance=do_classifier_free_guidance,
441
+ )
442
+
443
+ if device.type == "mps":
444
+ # HACK: MPS: There is a panic when padding bool tensors,
445
+ # so cast to int tensor for the pad and back to bool afterwards
446
+ text_mask = text_mask.type(torch.int)
447
+ decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=1)
448
+ decoder_text_mask = decoder_text_mask.type(torch.bool)
449
+ else:
450
+ decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=True)
451
+
452
+ self.decoder_scheduler.set_timesteps(decoder_num_inference_steps, device=device)
453
+ decoder_timesteps_tensor = self.decoder_scheduler.timesteps
454
+
455
+ num_channels_latents = self.decoder.config.in_channels
456
+ height = self.decoder.config.sample_size
457
+ width = self.decoder.config.sample_size
458
+
459
+ decoder_latents = self.prepare_latents(
460
+ (batch_size, num_channels_latents, height, width),
461
+ text_encoder_hidden_states.dtype,
462
+ device,
463
+ generator,
464
+ None,
465
+ self.decoder_scheduler,
466
+ )
467
+
468
+ for i, t in enumerate(self.progress_bar(decoder_timesteps_tensor)):
469
+ # expand the latents if we are doing classifier free guidance
470
+ latent_model_input = torch.cat([decoder_latents] * 2) if do_classifier_free_guidance else decoder_latents
471
+
472
+ noise_pred = self.decoder(
473
+ sample=latent_model_input,
474
+ timestep=t,
475
+ encoder_hidden_states=text_encoder_hidden_states,
476
+ class_labels=additive_clip_time_embeddings,
477
+ attention_mask=decoder_text_mask,
478
+ ).sample
479
+
480
+ if do_classifier_free_guidance:
481
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
482
+ noise_pred_uncond, _ = noise_pred_uncond.split(latent_model_input.shape[1], dim=1)
483
+ noise_pred_text, predicted_variance = noise_pred_text.split(latent_model_input.shape[1], dim=1)
484
+ noise_pred = noise_pred_uncond + decoder_guidance_scale * (noise_pred_text - noise_pred_uncond)
485
+ noise_pred = torch.cat([noise_pred, predicted_variance], dim=1)
486
+
487
+ if i + 1 == decoder_timesteps_tensor.shape[0]:
488
+ prev_timestep = None
489
+ else:
490
+ prev_timestep = decoder_timesteps_tensor[i + 1]
491
+
492
+ # compute the previous noisy sample x_t -> x_t-1
493
+ decoder_latents = self.decoder_scheduler.step(
494
+ noise_pred, t, decoder_latents, prev_timestep=prev_timestep, generator=generator
495
+ ).prev_sample
496
+
497
+ decoder_latents = decoder_latents.clamp(-1, 1)
498
+
499
+ image_small = decoder_latents
500
+
501
+ # done decoder
502
+
503
+ # super res
504
+
505
+ self.super_res_scheduler.set_timesteps(super_res_num_inference_steps, device=device)
506
+ super_res_timesteps_tensor = self.super_res_scheduler.timesteps
507
+
508
+ channels = self.super_res_first.config.in_channels // 2
509
+ height = self.super_res_first.config.sample_size
510
+ width = self.super_res_first.config.sample_size
511
+
512
+ super_res_latents = self.prepare_latents(
513
+ (batch_size, channels, height, width),
514
+ image_small.dtype,
515
+ device,
516
+ generator,
517
+ None,
518
+ self.super_res_scheduler,
519
+ )
520
+
521
+ if device.type == "mps":
522
+ # MPS does not support many interpolations
523
+ image_upscaled = F.interpolate(image_small, size=[height, width])
524
+ else:
525
+ interpolate_antialias = {}
526
+ if "antialias" in inspect.signature(F.interpolate).parameters:
527
+ interpolate_antialias["antialias"] = True
528
+
529
+ image_upscaled = F.interpolate(
530
+ image_small, size=[height, width], mode="bicubic", align_corners=False, **interpolate_antialias
531
+ )
532
+
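The `antialias` handling above is a small capability check that can be reused on its own: only pass the keyword if the installed torch version's `F.interpolate` accepts it. A standalone sketch (tensor sizes are arbitrary):

```python
import inspect

import torch
from torch.nn import functional as F

interpolate_kwargs = {}
if "antialias" in inspect.signature(F.interpolate).parameters:
    # Older torch releases do not accept `antialias`, so it is added conditionally.
    interpolate_kwargs["antialias"] = True

x = torch.randn(1, 3, 64, 64)
y = F.interpolate(x, size=[256, 256], mode="bicubic", align_corners=False, **interpolate_kwargs)
print(y.shape)  # torch.Size([1, 3, 256, 256])
```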
533
+ for i, t in enumerate(self.progress_bar(super_res_timesteps_tensor)):
534
+ # no classifier free guidance
535
+
536
+ if i == super_res_timesteps_tensor.shape[0] - 1:
537
+ unet = self.super_res_last
538
+ else:
539
+ unet = self.super_res_first
540
+
541
+ latent_model_input = torch.cat([super_res_latents, image_upscaled], dim=1)
542
+
543
+ noise_pred = unet(
544
+ sample=latent_model_input,
545
+ timestep=t,
546
+ ).sample
547
+
548
+ if i + 1 == super_res_timesteps_tensor.shape[0]:
549
+ prev_timestep = None
550
+ else:
551
+ prev_timestep = super_res_timesteps_tensor[i + 1]
552
+
553
+ # compute the previous noisy sample x_t -> x_t-1
554
+ super_res_latents = self.super_res_scheduler.step(
555
+ noise_pred, t, super_res_latents, prev_timestep=prev_timestep, generator=generator
556
+ ).prev_sample
557
+
558
+ image = super_res_latents
559
+ # done super res
560
+
561
+ # post processing
562
+
563
+ image = image * 0.5 + 0.5
564
+ image = image.clamp(0, 1)
565
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
566
+
567
+ if output_type == "pil":
568
+ image = self.numpy_to_pil(image)
569
+
570
+ if not return_dict:
571
+ return (image,)
572
+
573
+ return ImagePipelineOutput(images=image)
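A minimal usage sketch for this pipeline, assuming it is loaded as the `unclip_text_interpolation` community pipeline on top of an unCLIP checkpoint such as `kakaobrain/karlo-v1-alpha` (checkpoint name and prompts are illustrative, not prescribed by the file above):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha",                 # assumed unCLIP checkpoint
    custom_pipeline="unclip_text_interpolation",
    torch_dtype=torch.float16,
)

output = pipe(
    start_prompt="a photograph of an adult lion",
    end_prompt="a photograph of a lion cub",
    steps=6,
)

for i, image in enumerate(output.images):
    image.save(f"interpolation_{i}.png")
```

Note that `enable_sequential_cpu_offload` defaults to `True` in `__call__`, so `accelerate` and a CUDA device are expected unless it is explicitly disabled.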
v0.19.2/wildcard_stable_diffusion.py ADDED
@@ -0,0 +1,418 @@
1
+ import inspect
2
+ import os
3
+ import random
4
+ import re
5
+ from dataclasses import dataclass
6
+ from typing import Callable, Dict, List, Optional, Union
7
+
8
+ import torch
9
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
10
+
11
+ from diffusers import DiffusionPipeline
12
+ from diffusers.configuration_utils import FrozenDict
13
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
14
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
15
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
16
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
17
+ from diffusers.utils import deprecate, logging
18
+
19
+
20
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
21
+
22
+ global_re_wildcard = re.compile(r"__([^_]*)__")
23
+
24
+
25
+ def get_filename(path: str):
26
+ # this doesn't work on Windows
27
+ return os.path.basename(path).split(".txt")[0]
28
+
29
+
30
+ def read_wildcard_values(path: str):
31
+ with open(path, encoding="utf8") as f:
32
+ return f.read().splitlines()
33
+
34
+
35
+ def grab_wildcard_values(wildcard_option_dict: Dict[str, List[str]] = {}, wildcard_files: List[str] = []):
36
+ for wildcard_file in wildcard_files:
37
+ filename = get_filename(wildcard_file)
38
+ read_values = read_wildcard_values(wildcard_file)
39
+ if filename not in wildcard_option_dict:
40
+ wildcard_option_dict[filename] = []
41
+ wildcard_option_dict[filename].extend(read_values)
42
+ return wildcard_option_dict
43
+
44
+
45
+ def replace_prompt_with_wildcards(
46
+ prompt: str, wildcard_option_dict: Dict[str, List[str]] = {}, wildcard_files: List[str] = []
47
+ ):
48
+ new_prompt = prompt
49
+
50
+ # get wildcard options
51
+ wildcard_option_dict = grab_wildcard_values(wildcard_option_dict, wildcard_files)
52
+
53
+ for m in global_re_wildcard.finditer(new_prompt):
54
+ wildcard_value = m.group()
55
+ replace_value = random.choice(wildcard_option_dict[wildcard_value.strip("__")])
56
+ new_prompt = new_prompt.replace(wildcard_value, replace_value, 1)
57
+
58
+ return new_prompt
59
+
60
+
61
+ @dataclass
62
+ class WildcardStableDiffusionOutput(StableDiffusionPipelineOutput):
63
+ prompts: List[str]
64
+
65
+
66
+ class WildcardStableDiffusionPipeline(DiffusionPipeline):
67
+ r"""
68
+ Example Usage:
69
+ pipe = WildcardStableDiffusionPipeline.from_pretrained(
70
+ "CompVis/stable-diffusion-v1-4",
71
+
72
+ torch_dtype=torch.float16,
73
+ )
74
+ prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
75
+ out = pipe(
76
+ prompt,
77
+ wildcard_option_dict={
78
+ "clothing":["hat", "shirt", "scarf", "beret"]
79
+ },
80
+ wildcard_files=["object.txt", "animal.txt"],
81
+ num_prompt_samples=1
82
+ )
83
+
84
+
85
+ Pipeline for text-to-image generation with wild cards using Stable Diffusion.
86
+
87
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
88
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
89
+
90
+ Args:
91
+ vae ([`AutoencoderKL`]):
92
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
93
+ text_encoder ([`CLIPTextModel`]):
94
+ Frozen text-encoder. Stable Diffusion uses the text portion of
95
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
96
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
97
+ tokenizer (`CLIPTokenizer`):
98
+ Tokenizer of class
99
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
100
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
101
+ scheduler ([`SchedulerMixin`]):
102
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
103
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
104
+ safety_checker ([`StableDiffusionSafetyChecker`]):
105
+ Classification module that estimates whether generated images could be considered offensive or harmful.
106
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
107
+ feature_extractor ([`CLIPImageProcessor`]):
108
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
109
+ """
110
+
111
+ def __init__(
112
+ self,
113
+ vae: AutoencoderKL,
114
+ text_encoder: CLIPTextModel,
115
+ tokenizer: CLIPTokenizer,
116
+ unet: UNet2DConditionModel,
117
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
118
+ safety_checker: StableDiffusionSafetyChecker,
119
+ feature_extractor: CLIPImageProcessor,
120
+ ):
121
+ super().__init__()
122
+
123
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
124
+ deprecation_message = (
125
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
126
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
127
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
128
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
129
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
130
+ " file"
131
+ )
132
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
133
+ new_config = dict(scheduler.config)
134
+ new_config["steps_offset"] = 1
135
+ scheduler._internal_dict = FrozenDict(new_config)
136
+
137
+ if safety_checker is None:
138
+ logger.warning(
139
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
140
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
141
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
142
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
143
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
144
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
145
+ )
146
+
147
+ self.register_modules(
148
+ vae=vae,
149
+ text_encoder=text_encoder,
150
+ tokenizer=tokenizer,
151
+ unet=unet,
152
+ scheduler=scheduler,
153
+ safety_checker=safety_checker,
154
+ feature_extractor=feature_extractor,
155
+ )
156
+
157
+ @torch.no_grad()
158
+ def __call__(
159
+ self,
160
+ prompt: Union[str, List[str]],
161
+ height: int = 512,
162
+ width: int = 512,
163
+ num_inference_steps: int = 50,
164
+ guidance_scale: float = 7.5,
165
+ negative_prompt: Optional[Union[str, List[str]]] = None,
166
+ num_images_per_prompt: Optional[int] = 1,
167
+ eta: float = 0.0,
168
+ generator: Optional[torch.Generator] = None,
169
+ latents: Optional[torch.FloatTensor] = None,
170
+ output_type: Optional[str] = "pil",
171
+ return_dict: bool = True,
172
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
173
+ callback_steps: int = 1,
174
+ wildcard_option_dict: Dict[str, List[str]] = {},
175
+ wildcard_files: List[str] = [],
176
+ num_prompt_samples: Optional[int] = 1,
177
+ **kwargs,
178
+ ):
179
+ r"""
180
+ Function invoked when calling the pipeline for generation.
181
+
182
+ Args:
183
+ prompt (`str` or `List[str]`):
184
+ The prompt or prompts to guide the image generation.
185
+ height (`int`, *optional*, defaults to 512):
186
+ The height in pixels of the generated image.
187
+ width (`int`, *optional*, defaults to 512):
188
+ The width in pixels of the generated image.
189
+ num_inference_steps (`int`, *optional*, defaults to 50):
190
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
191
+ expense of slower inference.
192
+ guidance_scale (`float`, *optional*, defaults to 7.5):
193
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
194
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
195
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
196
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
197
+ usually at the expense of lower image quality.
198
+ negative_prompt (`str` or `List[str]`, *optional*):
199
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
200
+ if `guidance_scale` is less than `1`).
201
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
202
+ The number of images to generate per prompt.
203
+ eta (`float`, *optional*, defaults to 0.0):
204
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
205
+ [`schedulers.DDIMScheduler`], will be ignored for others.
206
+ generator (`torch.Generator`, *optional*):
207
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
208
+ deterministic.
209
+ latents (`torch.FloatTensor`, *optional*):
210
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
211
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
212
+ tensor will ge generated by sampling using the supplied random `generator`.
213
+ output_type (`str`, *optional*, defaults to `"pil"`):
214
+ The output format of the generate image. Choose between
215
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
216
+ return_dict (`bool`, *optional*, defaults to `True`):
217
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
218
+ plain tuple.
219
+ callback (`Callable`, *optional*):
220
+ A function that will be called every `callback_steps` steps during inference. The function will be
221
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
222
+ callback_steps (`int`, *optional*, defaults to 1):
223
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
224
+ called at every step.
225
+ wildcard_option_dict (`Dict[str, List[str]]`):
226
+ Dict mapping a wildcard name to a list of possible replacement values. For example, for the prompt "A __animal__ sitting on a chair", possible values for "animal" can be provided as {"animal": ["dog", "cat", "fox"]}.
227
+ wildcard_files (`List[str]`):
228
+ List of paths to `.txt` files containing wildcard replacement values. For example, for the prompt "A __animal__ sitting on a chair", a file such as "animal.txt" can be provided.
229
+ num_prompt_samples (`int`, *optional*, defaults to 1):
230
+ Number of times to sample wildcards for each prompt provided.
231
+
232
+ Returns:
233
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
234
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
235
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
236
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
237
+ (nsfw) content, according to the `safety_checker`.
238
+ """
239
+
240
+ if isinstance(prompt, str):
241
+ prompt = [
242
+ replace_prompt_with_wildcards(prompt, wildcard_option_dict, wildcard_files)
243
+ for i in range(num_prompt_samples)
244
+ ]
245
+ batch_size = len(prompt)
246
+ elif isinstance(prompt, list):
247
+ prompt_list = []
248
+ for p in prompt:
249
+ for i in range(num_prompt_samples):
250
+ prompt_list.append(replace_prompt_with_wildcards(p, wildcard_option_dict, wildcard_files))
251
+ prompt = prompt_list
252
+ batch_size = len(prompt)
253
+ else:
254
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
255
+
256
+ if height % 8 != 0 or width % 8 != 0:
257
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
258
+
259
+ if (callback_steps is None) or (
260
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
261
+ ):
262
+ raise ValueError(
263
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
264
+ f" {type(callback_steps)}."
265
+ )
266
+
267
+ # get prompt text embeddings
268
+ text_inputs = self.tokenizer(
269
+ prompt,
270
+ padding="max_length",
271
+ max_length=self.tokenizer.model_max_length,
272
+ return_tensors="pt",
273
+ )
274
+ text_input_ids = text_inputs.input_ids
275
+
276
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
277
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
278
+ logger.warning(
279
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
280
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
281
+ )
282
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
283
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
284
+
285
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
286
+ bs_embed, seq_len, _ = text_embeddings.shape
287
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
288
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
289
+
290
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
291
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
292
+ # corresponds to doing no classifier free guidance.
293
+ do_classifier_free_guidance = guidance_scale > 1.0
294
+ # get unconditional embeddings for classifier free guidance
295
+ if do_classifier_free_guidance:
296
+ uncond_tokens: List[str]
297
+ if negative_prompt is None:
298
+ uncond_tokens = [""] * batch_size
299
+ elif type(prompt) is not type(negative_prompt):
300
+ raise TypeError(
301
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
302
+ f" {type(prompt)}."
303
+ )
304
+ elif isinstance(negative_prompt, str):
305
+ uncond_tokens = [negative_prompt]
306
+ elif batch_size != len(negative_prompt):
307
+ raise ValueError(
308
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
309
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
310
+ " the batch size of `prompt`."
311
+ )
312
+ else:
313
+ uncond_tokens = negative_prompt
314
+
315
+ max_length = text_input_ids.shape[-1]
316
+ uncond_input = self.tokenizer(
317
+ uncond_tokens,
318
+ padding="max_length",
319
+ max_length=max_length,
320
+ truncation=True,
321
+ return_tensors="pt",
322
+ )
323
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
324
+
325
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
326
+ seq_len = uncond_embeddings.shape[1]
327
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
328
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
329
+
330
+ # For classifier free guidance, we need to do two forward passes.
331
+ # Here we concatenate the unconditional and text embeddings into a single batch
332
+ # to avoid doing two forward passes
333
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
334
+
335
+ # get the initial random noise unless the user supplied it
336
+
337
+ # Unlike in other pipelines, latents need to be generated in the target device
338
+ # for 1-to-1 results reproducibility with the CompVis implementation.
339
+ # However this currently doesn't work in `mps`.
340
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
341
+ latents_dtype = text_embeddings.dtype
342
+ if latents is None:
343
+ if self.device.type == "mps":
344
+ # randn does not exist on mps
345
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
346
+ self.device
347
+ )
348
+ else:
349
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
350
+ else:
351
+ if latents.shape != latents_shape:
352
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
353
+ latents = latents.to(self.device)
354
+
355
+ # set timesteps
356
+ self.scheduler.set_timesteps(num_inference_steps)
357
+
358
+ # Some schedulers like PNDM have timesteps as arrays
359
+ # It's more optimized to move all timesteps to correct device beforehand
360
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
361
+
362
+ # scale the initial noise by the standard deviation required by the scheduler
363
+ latents = latents * self.scheduler.init_noise_sigma
364
+
365
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
366
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
367
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
368
+ # and should be between [0, 1]
369
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
370
+ extra_step_kwargs = {}
371
+ if accepts_eta:
372
+ extra_step_kwargs["eta"] = eta
373
+
374
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
375
+ # expand the latents if we are doing classifier free guidance
376
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
377
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
378
+
379
+ # predict the noise residual
380
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
381
+
382
+ # perform guidance
383
+ if do_classifier_free_guidance:
384
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
385
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
386
+
387
+ # compute the previous noisy sample x_t -> x_t-1
388
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
389
+
390
+ # call the callback, if provided
391
+ if callback is not None and i % callback_steps == 0:
392
+ callback(i, t, latents)
393
+
394
+ latents = 1 / 0.18215 * latents
395
+ image = self.vae.decode(latents).sample
396
+
397
+ image = (image / 2 + 0.5).clamp(0, 1)
398
+
399
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
400
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
401
+
402
+ if self.safety_checker is not None:
403
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
404
+ self.device
405
+ )
406
+ image, has_nsfw_concept = self.safety_checker(
407
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
408
+ )
409
+ else:
410
+ has_nsfw_concept = None
411
+
412
+ if output_type == "pil":
413
+ image = self.numpy_to_pil(image)
414
+
415
+ if not return_dict:
416
+ return (image, has_nsfw_concept)
417
+
418
+ return WildcardStableDiffusionOutput(images=image, nsfw_content_detected=has_nsfw_concept, prompts=prompt)
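A minimal usage sketch, assuming this file is loaded as the `wildcard_stable_diffusion` community pipeline (the wildcard files below are placeholders you would provide yourself):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="wildcard_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    "__animal__ sitting on a __object__ wearing a __clothing__",
    wildcard_option_dict={"clothing": ["hat", "shirt", "scarf", "beret"]},
    wildcard_files=["object.txt", "animal.txt"],  # placeholder wildcard files
    num_prompt_samples=1,
)

out.images[0].save("wildcard.png")
print(out.prompts)  # the fully expanded prompts that were actually used
```

The returned `WildcardStableDiffusionOutput` carries the expanded `prompts` alongside the images, which is useful for logging which wildcard values were sampled.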