Commit f0cae4b (1 parent: 4a97de5), committed by Diffusers Bot: Upload folder using huggingface_hub

v0.11.1/README.md ADDED
@@ -0,0 +1,818 @@
1
+ # Community Examples
2
+
3
+ > **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
4
+
5
+ **Community** examples consist of both inference and training examples that have been added by the community.
6
+ Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out.
7
+ If a community pipeline doesn't work as expected, please open an issue and ping the author on it.
8
+
9
+ | Example | Description | Code Example | Colab | Author |
10
+ |:---------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------:|
11
+ | CLIP Guided Stable Diffusion | Doing CLIP guidance for text to image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) |
12
+ | One Step U-Net (Dummy) | Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
13
+ | Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) |
14
+ | Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
15
+ | Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion Pipeline without the token length limit, with support for parsing weighting in the prompt. | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) |
16
+ | Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech)
17
+ | Wild Card Stable Diffusion | Stable Diffusion Pipeline that supports prompts that contain wildcard terms (indicated by surrounding double underscores), with values instantiated randomly from a corresponding txt file or a dictionary of possible values | [Wildcard Stable Diffusion](#wildcard-stable-diffusion) | - | [Shyam Sudhakaran](https://github.com/shyamsn97) |
18
+ | [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) | Stable Diffusion Pipeline that supports prompts that contain "|" in prompts (as an AND condition) and weights (separated by "|" as well) to positively / negatively weight prompts. | [Composable Stable Diffusion](#composable-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
19
+ | Seed Resizing Stable Diffusion| Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | - | [Mark Rich](https://github.com/MarkRich) |
20
+ | Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image| [Imagic Stable Diffusion](#imagic-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
21
+ | Multilingual Stable Diffusion| Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | - | [Juan Carlos Piñeros](https://github.com/juancopi81) |
22
+ | Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting| [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
23
+ | Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting| [Text Based Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) |
24
+ | Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - |[Stuti R.](https://github.com/kingstut) |
25
+ | K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
26
+ | Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
27
+ | Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Diffusion and compare their results together | [Stable Diffusion Comparison](#stable-diffusion-comparisons) | - | [Suvaditya Mukherjee](https://github.com/suvadityamuk) |
28
+
29
+
30
+
31
+ To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline.from_pretrained`, set to the name of one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
32
+ ```py
33
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="filename_in_the_community_folder")
34
+ ```
35
+
36
+ ## Example usages
37
+
38
+ ### CLIP Guided Stable Diffusion
39
+
40
+ CLIP guided stable diffusion can help to generate more realistic images
41
+ by guiding stable diffusion at every denoising step with an additional CLIP model.
42
+
43
+ The following code requires roughly 12GB of GPU RAM.
44
+
45
+ ```python
46
+ from diffusers import DiffusionPipeline
47
+ from transformers import CLIPFeatureExtractor, CLIPModel
48
+ import torch
49
+
50
+
51
+ feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
52
+ clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)
53
+
54
+
55
+ guided_pipeline = DiffusionPipeline.from_pretrained(
56
+ "runwayml/stable-diffusion-v1-5",
57
+ custom_pipeline="clip_guided_stable_diffusion",
58
+ clip_model=clip_model,
59
+ feature_extractor=feature_extractor,
60
+
61
+ torch_dtype=torch.float16,
62
+ )
63
+ guided_pipeline.enable_attention_slicing()
64
+ guided_pipeline = guided_pipeline.to("cuda")
65
+
66
+ prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
67
+
68
+ generator = torch.Generator(device="cuda").manual_seed(0)
69
+ images = []
70
+ for i in range(4):
71
+ image = guided_pipeline(
72
+ prompt,
73
+ num_inference_steps=50,
74
+ guidance_scale=7.5,
75
+ clip_guidance_scale=100,
76
+ num_cutouts=4,
77
+ use_cutouts=False,
78
+ generator=generator,
79
+ ).images[0]
80
+ images.append(image)
81
+
82
+ # save images locally
83
+ for i, img in enumerate(images):
84
+ img.save(f"./clip_guided_sd/image_{i}.png")
85
+ ```
86
+
87
+ The `images` list contains PIL images that can be saved locally or displayed directly in a Google Colab.
88
+ Generated images tend to be of higher quality than when using Stable Diffusion natively. E.g. the above script generates the following images:
89
+
90
+ ![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg)
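+
+ To display the results side by side, e.g. directly in a Colab notebook, a small helper like the one below can tile the `images` list into a grid. This is only a sketch using PIL; the 2x2 layout is an assumption based on the four images generated above.
+
+ ```python
+ from PIL import Image
+
+ def image_grid(imgs, rows, cols):
+     # paste the individual PIL images onto one canvas, row by row
+     w, h = imgs[0].size
+     grid = Image.new("RGB", size=(cols * w, rows * h))
+     for i, img in enumerate(imgs):
+         grid.paste(img, box=(i % cols * w, i // cols * h))
+     return grid
+
+ image_grid(images, rows=2, cols=2)
+ ```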
91
+
92
+ ### One Step Unet
93
+
94
+ The dummy "one-step-unet" can be run as follows:
95
+
96
+ ```python
97
+ from diffusers import DiffusionPipeline
98
+
99
+ pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
100
+ pipe()
101
+ ```
102
+
103
+ **Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841).
104
+
105
+ ### Stable Diffusion Interpolation
106
+
107
+ The following code can be run on a GPU with at least 8GB of VRAM and should take approximately 5 minutes.
108
+
109
+ ```python
110
+ from diffusers import DiffusionPipeline
111
+ import torch
112
+
113
+ pipe = DiffusionPipeline.from_pretrained(
114
+ "CompVis/stable-diffusion-v1-4",
115
+ revision='fp16',
116
+ torch_dtype=torch.float16,
117
+ safety_checker=None, # Very important for videos...lots of false positives while interpolating
118
+ custom_pipeline="interpolate_stable_diffusion",
119
+ ).to('cuda')
120
+ pipe.enable_attention_slicing()
121
+
122
+ frame_filepaths = pipe.walk(
123
+ prompts=['a dog', 'a cat', 'a horse'],
124
+ seeds=[42, 1337, 1234],
125
+ num_interpolation_steps=16,
126
+ output_dir='./dreams',
127
+ batch_size=4,
128
+ height=512,
129
+ width=512,
130
+ guidance_scale=8.5,
131
+ num_inference_steps=50,
132
+ )
133
+ ```
134
+
135
+ The `walk(...)` function returns a list of file paths for the generated frames, which are saved under the folder defined by `output_dir`. You can use these images to create videos of Stable Diffusion.
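+
+ If you want to stitch the saved frames into a video yourself, a minimal sketch could look like the following. It assumes the optional `imageio` and `imageio-ffmpeg` packages are installed and that the frames were written as `.png` files somewhere under `output_dir` (adjust the glob pattern to whatever you find in `./dreams`):
+
+ ```python
+ import glob
+
+ import imageio
+
+ # collect the frames produced by `walk(...)` in generation order
+ frame_paths = sorted(glob.glob("./dreams/**/*.png", recursive=True))
+
+ # write them out as an mp4 at 5 frames per second
+ with imageio.get_writer("./dreams/interpolation.mp4", fps=5) as writer:
+     for path in frame_paths:
+         writer.append_data(imageio.imread(path))
+ ```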
136
+
137
+ > **Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.**
138
+
139
+ ### Stable Diffusion Mega
140
+
141
+ The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class.
142
+
143
+ ```python
144
+ #!/usr/bin/env python3
145
+ from diffusers import DiffusionPipeline
146
+ import PIL
147
+ import requests
148
+ from io import BytesIO
149
+ import torch
150
+
151
+
152
+ def download_image(url):
153
+ response = requests.get(url)
154
+ return PIL.Image.open(BytesIO(response.content)).convert("RGB")
155
+
156
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_mega", torch_dtype=torch.float16, revision="fp16")
157
+ pipe.to("cuda")
158
+ pipe.enable_attention_slicing()
159
+
160
+
161
+ ### Text-to-Image
162
+
163
+ images = pipe.text2img("An astronaut riding a horse").images
164
+
165
+ ### Image-to-Image
166
+
167
+ init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
168
+
169
+ prompt = "A fantasy landscape, trending on artstation"
170
+
171
+ images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
172
+
173
+ ### Inpainting
174
+
175
+ img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
176
+ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
177
+ init_image = download_image(img_url).resize((512, 512))
178
+ mask_image = download_image(mask_url).resize((512, 512))
179
+
180
+ prompt = "a cat sitting on a bench"
181
+ images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images
182
+ ```
183
+
184
+ As shown above, this one pipeline can run "text-to-image", "image-to-image", and "inpainting" all in a single class.
185
+
186
+ ### Long Prompt Weighting Stable Diffusion
187
+ Features of this custom pipeline:
188
+ - Input a prompt without the 77 token length limit.
189
+ - Includes text2img, img2img, and inpainting pipelines.
190
+ - Emphasize/weigh part of your prompt with parentheses as so: `a baby deer with (big eyes)`
191
+ - De-emphasize part of your prompt as so: `a [baby] deer with big eyes`
192
+ - Precisely weigh part of your prompt as so: `a baby deer with (big eyes:1.3)`
193
+
194
+ Prompt weighting equivalents:
195
+ - `a baby deer with` == `(a baby deer with:1.0)`
196
+ - `(big eyes)` == `(big eyes:1.1)`
197
+ - `((big eyes))` == `(big eyes:1.21)`
198
+ - `[big eyes]` == `(big eyes:0.91)`
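+
+ Each extra pair of parentheses multiplies the weight by 1.1 (so `((big eyes))` corresponds to 1.1 × 1.1 = 1.21), and each pair of square brackets divides it by 1.1 (so `[big eyes]` corresponds to 1 / 1.1 ≈ 0.91).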
199
+
200
+ You can run this custom pipeline like so:
201
+
202
+ #### pytorch
203
+
204
+ ```python
205
+ from diffusers import DiffusionPipeline
206
+ import torch
207
+
208
+ pipe = DiffusionPipeline.from_pretrained(
209
+ 'hakurei/waifu-diffusion',
210
+ custom_pipeline="lpw_stable_diffusion",
211
+
212
+ torch_dtype=torch.float16
213
+ )
214
+ pipe=pipe.to("cuda")
215
+
216
+ prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
217
+ neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
218
+
219
+ pipe.text2img(prompt, negative_prompt=neg_prompt, width=512,height=512,max_embeddings_multiples=3).images[0]
220
+
221
+ ```
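+
+ The same pipeline object also exposes `img2img` and `inpaint` entry points. Below is only a hedged sketch: the local image path is hypothetical and the keyword arguments are assumed to mirror the `text2img` call above.
+
+ ```python
+ from PIL import Image
+
+ # load any RGB image to use as the starting point (the path is just an example)
+ init_image = Image.open("./input.png").convert("RGB").resize((512, 512))
+
+ image = pipe.img2img(
+     prompt,
+     negative_prompt=neg_prompt,
+     image=init_image,
+     strength=0.75,
+     max_embeddings_multiples=3,
+ ).images[0]
+ ```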
222
+
223
+ #### onnxruntime
224
+
225
+ ```python
226
+ from diffusers import DiffusionPipeline
227
+ import torch
228
+
229
+ pipe = DiffusionPipeline.from_pretrained(
230
+ 'CompVis/stable-diffusion-v1-4',
231
+ custom_pipeline="lpw_stable_diffusion_onnx",
232
+ revision="onnx",
233
+ provider="CUDAExecutionProvider"
234
+ )
235
+
236
+ prompt = "a photo of an astronaut riding a horse on mars, best quality"
237
+ neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
238
+
239
+ pipe.text2img(prompt,negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
240
+
241
+ ```
242
+
243
+ If you see the warning `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`, do not worry, it is normal.
244
+
245
+ ### Speech to Image
246
+
247
+ The following code can generate an image from an audio sample using the pre-trained OpenAI whisper-small model and Stable Diffusion.
248
+
249
+ ```Python
250
+ import torch
251
+
252
+ import matplotlib.pyplot as plt
253
+ from datasets import load_dataset
254
+ from diffusers import DiffusionPipeline
255
+ from transformers import (
256
+ WhisperForConditionalGeneration,
257
+ WhisperProcessor,
258
+ )
259
+
260
+
261
+ device = "cuda" if torch.cuda.is_available() else "cpu"
262
+
263
+ ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
264
+
265
+ audio_sample = ds[3]
266
+
267
+ text = audio_sample["text"].lower()
268
+ speech_data = audio_sample["audio"]["array"]
269
+
270
+ model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
271
+ processor = WhisperProcessor.from_pretrained("openai/whisper-small")
272
+
273
+ diffuser_pipeline = DiffusionPipeline.from_pretrained(
274
+ "CompVis/stable-diffusion-v1-4",
275
+ custom_pipeline="speech_to_image_diffusion",
276
+ speech_model=model,
277
+ speech_processor=processor,
278
+
279
+ torch_dtype=torch.float16,
280
+ )
281
+
282
+ diffuser_pipeline.enable_attention_slicing()
283
+ diffuser_pipeline = diffuser_pipeline.to(device)
284
+
285
+ output = diffuser_pipeline(speech_data)
286
+ plt.imshow(output.images[0])
287
+ ```
288
+ This example produces the following image:
289
+
290
+ ![image](https://user-images.githubusercontent.com/45072645/196901736-77d9c6fc-63ee-4072-90b0-dc8b903d63e3.png)
291
+
292
+ ### Wildcard Stable Diffusion
293
+ Following the great examples from https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py and https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts#wildcards, here's a minimal implementation that allows users to add "wildcards", denoted by `__wildcard__`, to prompts. These act as placeholders for values randomly sampled from either a dictionary or a `.txt` file. For example:
294
+
295
+ Say we have a prompt:
296
+
297
+ ```
298
+ prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
299
+ ```
300
+
301
+ We can then define possible values to be sampled for `animal`, `object`, and `clothing`. These can come from a `.txt` file with the same name as the category.
302
+
303
+ The possible values can also be defined / combined by using a dictionary like: `{"animal": ["dog", "cat", "mouse"]}`.
304
+
305
+ The actual pipeline works just like `StableDiffusionPipeline`, except the `__call__` method takes in:
306
+
307
+ - `wildcard_files`: list of file paths for wildcard replacement
308
+ - `wildcard_option_dict`: dict with key as `wildcard` and values as a list of possible replacements
309
+ - `num_prompt_samples`: number of prompts to sample, uniformly sampling wildcards
310
+
311
+ A full example:
312
+
313
+ create `animal.txt`, with contents like:
314
+
315
+ ```
316
+ dog
317
+ cat
318
+ mouse
319
+ ```
320
+
321
+ create `object.txt`, with contents like:
322
+
323
+ ```
324
+ chair
325
+ sofa
326
+ bench
327
+ ```
328
+
329
+ ```python
330
+ from diffusers import DiffusionPipeline
331
+ import torch
332
+
333
+ pipe = DiffusionPipeline.from_pretrained(
334
+ "CompVis/stable-diffusion-v1-4",
335
+ custom_pipeline="wildcard_stable_diffusion",
336
+
337
+ torch_dtype=torch.float16,
338
+ )
339
+ prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
340
+ out = pipe(
341
+ prompt,
342
+ wildcard_option_dict={
343
+ "clothing":["hat", "shirt", "scarf", "beret"]
344
+ },
345
+ wildcard_files=["object.txt", "animal.txt"],
346
+ num_prompt_samples=1
347
+ )
348
+ ```
349
+
350
+ ### Composable Stable Diffusion
351
+
352
+ [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) proposes conjunction and negation (negative prompts) operators for compositional generation with conditional diffusion models.
353
+
354
+ ```python
355
+ import torch as th
356
+ import numpy as np
357
+ import torchvision.utils as tvu
358
+ from diffusers import DiffusionPipeline
359
+
360
+ has_cuda = th.cuda.is_available()
361
+ device = th.device('cpu' if not has_cuda else 'cuda')
362
+
363
+ pipe = DiffusionPipeline.from_pretrained(
364
+ "CompVis/stable-diffusion-v1-4",
365
+ use_auth_token=True,
366
+ custom_pipeline="composable_stable_diffusion",
367
+ ).to(device)
368
+
369
+
370
+ def dummy(images, **kwargs):
371
+ return images, False
372
+
373
+ pipe.safety_checker = dummy
374
+
375
+ images = []
376
+ generator = th.Generator("cuda").manual_seed(0)
377
+
378
+ seed = 0
379
+ prompt = "a forest | a camel"
380
+ weights = " 1 | 1" # Equal weight to each prompt. Can be negative
381
+
382
+ images = []
383
+ for i in range(4):
384
+ res = pipe(
385
+ prompt,
386
+ guidance_scale=7.5,
387
+ num_inference_steps=50,
388
+ weights=weights,
389
+ generator=generator)
390
+ image = res.images[0]
391
+ images.append(image)
392
+
393
+ for i, img in enumerate(images):
394
+ img.save(f"./composable_diffusion/image_{i}.png")
395
+ ```
396
+
397
+ ### Imagic Stable Diffusion
398
+ Allows you to edit an image using stable diffusion.
399
+
400
+ ```python
401
+ import requests
402
+ from PIL import Image
403
+ from io import BytesIO
404
+ import torch
405
+ import os
406
+ from diffusers import DiffusionPipeline, DDIMScheduler
407
+ has_cuda = torch.cuda.is_available()
408
+ device = torch.device('cpu' if not has_cuda else 'cuda')
409
+ pipe = DiffusionPipeline.from_pretrained(
410
+ "CompVis/stable-diffusion-v1-4",
411
+ safety_checker=None,
412
+ use_auth_token=True,
413
+ custom_pipeline="imagic_stable_diffusion",
414
+ scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
415
+ ).to(device)
416
+ generator = torch.Generator("cuda").manual_seed(0)
417
+ seed = 0
418
+ prompt = "A photo of Barack Obama smiling with a big grin"
419
+ url = 'https://www.dropbox.com/s/6tlwzr73jd1r9yk/obama.png?dl=1'
420
+ response = requests.get(url)
421
+ init_image = Image.open(BytesIO(response.content)).convert("RGB")
422
+ init_image = init_image.resize((512, 512))
423
+ res = pipe.train(
424
+ prompt,
425
+ image=init_image,
426
+ generator=generator)
427
+ res = pipe(alpha=1, guidance_scale=7.5, num_inference_steps=50)
428
+ os.makedirs("imagic", exist_ok=True)
429
+ image = res.images[0]
430
+ image.save('./imagic/imagic_image_alpha_1.png')
431
+ res = pipe(alpha=1.5, guidance_scale=7.5, num_inference_steps=50)
432
+ image = res.images[0]
433
+ image.save('./imagic/imagic_image_alpha_1_5.png')
434
+ res = pipe(alpha=2, guidance_scale=7.5, num_inference_steps=50)
435
+ image = res.images[0]
436
+ image.save('./imagic/imagic_image_alpha_2.png')
437
+ ```
438
+
439
+ ### Seed Resizing
440
+ Test seed resizing. First, generate an image at 512 by 512; then generate an image at 512 by 592 with the same seed using seed resizing. Finally, generate a 512 by 592 image using the original Stable Diffusion pipeline for comparison.
441
+
442
+ ```python
443
+ import torch as th
444
+ import numpy as np
445
+ from diffusers import DiffusionPipeline
446
+
447
+ has_cuda = th.cuda.is_available()
448
+ device = th.device('cpu' if not has_cuda else 'cuda')
449
+
450
+ pipe = DiffusionPipeline.from_pretrained(
451
+ "CompVis/stable-diffusion-v1-4",
452
+ use_auth_token=True,
453
+ custom_pipeline="seed_resize_stable_diffusion"
454
+ ).to(device)
455
+
456
+ def dummy(images, **kwargs):
457
+ return images, False
458
+
459
+ pipe.safety_checker = dummy
460
+
461
+
462
+ images = []
463
+ th.manual_seed(0)
464
+ generator = th.Generator("cuda").manual_seed(0)
465
+
466
+ seed = 0
467
+ prompt = "A painting of a futuristic cop"
468
+
469
+ width = 512
470
+ height = 512
471
+
472
+ res = pipe(
473
+ prompt,
474
+ guidance_scale=7.5,
475
+ num_inference_steps=50,
476
+ height=height,
477
+ width=width,
478
+ generator=generator)
479
+ image = res.images[0]
480
+ image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
481
+
482
+
483
+ th.manual_seed(0)
484
+ generator = th.Generator("cuda").manual_seed(0)
485
+
486
+ pipe = DiffusionPipeline.from_pretrained(
487
+ "CompVis/stable-diffusion-v1-4",
488
+ use_auth_token=True,
489
+ custom_pipeline="/home/mark/open_source/diffusers/examples/community/"
490
+ ).to(device)
491
+
492
+ width = 512
493
+ height = 592
494
+
495
+ res = pipe(
496
+ prompt,
497
+ guidance_scale=7.5,
498
+ num_inference_steps=50,
499
+ height=height,
500
+ width=width,
501
+ generator=generator)
502
+ image = res.images[0]
503
+ image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
504
+
505
+ pipe_compare = DiffusionPipeline.from_pretrained(
506
+ "CompVis/stable-diffusion-v1-4",
507
+ use_auth_token=True,
508
+ custom_pipeline="/home/mark/open_source/diffusers/examples/community/"
509
+ ).to(device)
510
+
511
+ res = pipe_compare(
512
+ prompt,
513
+ guidance_scale=7.5,
514
+ num_inference_steps=50,
515
+ height=height,
516
+ width=width,
517
+ generator=generator
518
+ )
519
+
520
+ image = res.images[0]
521
+ image.save('./seed_resize/seed_resize_{w}_{h}_image_compare.png'.format(w=width, h=height))
522
+ ```
523
+
524
+ ### Multilingual Stable Diffusion Pipeline
525
+
526
+ The following code can generate images from text in different languages using the pre-trained [mBART-50 many-to-one multilingual machine translation model](https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt) and Stable Diffusion.
527
+
528
+ ```python
529
+ from PIL import Image
530
+
531
+ import torch
532
+
533
+ from diffusers import DiffusionPipeline
534
+ from transformers import (
535
+ pipeline,
536
+ MBart50TokenizerFast,
537
+ MBartForConditionalGeneration,
538
+ )
539
+ device = "cuda" if torch.cuda.is_available() else "cpu"
540
+ device_dict = {"cuda": 0, "cpu": -1}
541
+
542
+ # helper function taken from: https://huggingface.co/blog/stable_diffusion
543
+ def image_grid(imgs, rows, cols):
544
+ assert len(imgs) == rows*cols
545
+
546
+ w, h = imgs[0].size
547
+ grid = Image.new('RGB', size=(cols*w, rows*h))
548
+ grid_w, grid_h = grid.size
549
+
550
+ for i, img in enumerate(imgs):
551
+ grid.paste(img, box=(i%cols*w, i//cols*h))
552
+ return grid
553
+
554
+ # Add language detection pipeline
555
+ language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection"
556
+ language_detection_pipeline = pipeline("text-classification",
557
+ model=language_detection_model_ckpt,
558
+ device=device_dict[device])
559
+
560
+ # Add model for language translation
561
+ trans_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
562
+ trans_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device)
563
+
564
+ diffuser_pipeline = DiffusionPipeline.from_pretrained(
565
+ "CompVis/stable-diffusion-v1-4",
566
+ custom_pipeline="multilingual_stable_diffusion",
567
+ detection_pipeline=language_detection_pipeline,
568
+ translation_model=trans_model,
569
+ translation_tokenizer=trans_tokenizer,
570
+
571
+ torch_dtype=torch.float16,
572
+ )
573
+
574
+ diffuser_pipeline.enable_attention_slicing()
575
+ diffuser_pipeline = diffuser_pipeline.to(device)
576
+
577
+ prompt = ["a photograph of an astronaut riding a horse",
578
+ "Una casa en la playa",
579
+ "Ein Hund, der Orange isst",
580
+ "Un restaurant parisien"]
581
+
582
+ output = diffuser_pipeline(prompt)
583
+
584
+ images = output.images
585
+
586
+ grid = image_grid(images, rows=2, cols=2)
587
+ ```
588
+
589
+ This example produces the following images:
590
+ ![image](https://user-images.githubusercontent.com/4313860/198328706-295824a4-9856-4ce5-8e66-278ceb42fd29.png)
591
+
592
+ ### Image to Image Inpainting Stable Diffusion
593
+
594
+ Similar to the standard stable diffusion inpainting example, except with the addition of an `inner_image` argument.
595
+
596
+ `image`, `inner_image`, and `mask` should have the same dimensions. `inner_image` should have an alpha (transparency) channel.
597
+
598
+ The aim is to overlay two images, then mask out the boundary between `image` and `inner_image` to allow stable diffusion to make the connection more seamless.
599
+ For example, this could be used to place a logo on a shirt and make it blend seamlessly.
600
+
601
+ ```python
602
+ import PIL
603
+ import torch
604
+
605
+ from diffusers import DiffusionPipeline
606
+
607
+ image_path = "./path-to-image.png"
608
+ inner_image_path = "./path-to-inner-image.png"
609
+ mask_path = "./path-to-mask.png"
610
+
611
+ init_image = PIL.Image.open(image_path).convert("RGB").resize((512, 512))
612
+ inner_image = PIL.Image.open(inner_image_path).convert("RGBA").resize((512, 512))
613
+ mask_image = PIL.Image.open(mask_path).convert("RGB").resize((512, 512))
614
+
615
+ pipe = DiffusionPipeline.from_pretrained(
616
+ "runwayml/stable-diffusion-inpainting",
617
+ custom_pipeline="img2img_inpainting",
618
+
619
+ torch_dtype=torch.float16
620
+ )
621
+ pipe = pipe.to("cuda")
622
+
623
+ prompt = "Your prompt here!"
624
+ image = pipe(prompt=prompt, image=init_image, inner_image=inner_image, mask_image=mask_image).images[0]
625
+ ```
626
+
627
+ ![2 by 2 grid demonstrating image to image inpainting.](https://user-images.githubusercontent.com/44398246/203506577-ec303be4-887e-4ebd-a773-c83fcb3dd01a.png)
628
+
629
+ ### Text Based Inpainting Stable Diffusion
630
+
631
+ Use a text prompt to generate the mask for the area to be inpainted.
632
+ Currently uses the CLIPSeg model for mask generation, then calls the standard Stable Diffusion Inpainting pipeline to perform the inpainting.
633
+
634
+ ```python
635
+ from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
636
+ from diffusers import DiffusionPipeline
637
+
638
+ from PIL import Image
639
+ import requests
640
+ from torch import autocast
641
+
642
+ processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
643
+ model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
644
+
645
+ pipe = DiffusionPipeline.from_pretrained(
646
+ "runwayml/stable-diffusion-inpainting",
647
+ custom_pipeline="text_inpainting",
648
+ segmentation_model=model,
649
+ segmentation_processor=processor
650
+ )
651
+ pipe = pipe.to("cuda")
652
+
653
+
654
+ url = "https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true"
655
+ image = Image.open(requests.get(url, stream=True).raw).resize((512, 512))
656
+ text = "a glass" # will mask out this text
657
+ prompt = "a cup" # the masked out region will be replaced with this
658
+
659
+ with autocast("cuda"):
660
+ image = pipe(image=image, text=text, prompt=prompt).images[0]
661
+ ```
662
+
663
+ ### Bit Diffusion
664
+ Based on https://arxiv.org/abs/2208.04202, this is used for diffusion on discrete data - e.g., discrete image data or DNA sequence data. An unconditional discrete image can be generated like this:
665
+
666
+ ```python
667
+ from diffusers import DiffusionPipeline
668
+ pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="bit_diffusion")
669
+ image = pipe().images[0]
670
+
671
+ ```
672
+
673
+ ### Stable Diffusion with K Diffusion
674
+
675
+ Make sure you have @crowsonkb's https://github.com/crowsonkb/k-diffusion installed:
676
+
677
+ ```
678
+ pip install k-diffusion
679
+ ```
680
+
681
+ You can use the community pipeline as follows:
682
+
683
+ ```python
684
+ import torch
+
+ from diffusers import DiffusionPipeline
685
+
686
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
687
+ pipe = pipe.to("cuda")
688
+
689
+ prompt = "an astronaut riding a horse on mars"
690
+ pipe.set_scheduler("sample_heun")
691
+ generator = torch.Generator(device="cuda").manual_seed(seed)
692
+ image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
693
+
694
+ image.save("./astronaut_heun_k_diffusion.png")
695
+ ```
696
+
697
+ To make sure that K Diffusion and `diffusers` yield the same results:
698
+
699
+ **Diffusers**:
700
+ ```python
701
+ import torch
+
+ from diffusers import DiffusionPipeline, EulerDiscreteScheduler
702
+
703
+ seed = 33
+ prompt = "an astronaut riding a horse on mars"
704
+
705
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
706
+ pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
707
+ pipe = pipe.to("cuda")
708
+
709
+ generator = torch.Generator(device="cuda").manual_seed(seed)
710
+ image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
711
+ ```
712
+
713
+ ![diffusers_euler](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/k_diffusion/astronaut_euler.png)
714
+
715
+ **K Diffusion**:
716
+ ```python
717
+ import torch
+
+ from diffusers import DiffusionPipeline, EulerDiscreteScheduler
718
+
719
+ seed = 33
+ prompt = "an astronaut riding a horse on mars"
720
+
721
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
722
+ pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
723
+ pipe = pipe.to("cuda")
724
+
725
+ pipe.set_scheduler("sample_euler")
726
+ generator = torch.Generator(device="cuda").manual_seed(seed)
727
+ image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
728
+ ```
729
+
730
+ ![diffusers_euler](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/k_diffusion/astronaut_euler_k_diffusion.png)
731
+
732
+ ### Checkpoint Merger Pipeline
733
+ Based on the checkpoint merging feature of AUTOMATIC1111/webui. This is a custom pipeline that merges up to 3 pretrained model checkpoints as long as they are in the Hugging Face model_index.json format.
734
+
735
+ The checkpoint merging is currently memory intensive as it modifies the weights of a DiffusionPipeline object in place. Expect at least 13GB of RAM usage on Kaggle GPU kernels, and
736
+ on Colab you might run out of the 12GB of memory even while merging just two checkpoints.
737
+
738
+ Usage:
739
+ ```python
740
+ from diffusers import DiffusionPipeline
741
+
742
+ #Return a CheckpointMergerPipeline class that allows you to merge checkpoints.
743
+ #The checkpoint passed here is ignored. But still pass one of the checkpoints you plan to
744
+ #merge for convenience
745
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger")
746
+
747
+ #There are multiple possible scenarios:
748
+ #The pipeline with the merged checkpoints is returned in all the scenarios
749
+
750
+ # Compatible checkpoints, a.k.a. matched model_index.json files. Ignores the meta attributes (attrs with _ as prefix) in model_index.json during comparison.
751
+ merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4","CompVis/stable-diffusion-v1-2"], interp = "sigmoid", alpha = 0.4)
752
+
753
+ #Incompatible checkpoints in model_index.json but merge might be possible. Use force = True to ignore model_index.json compatibility
754
+ merged_pipe_1 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion"], force = True, interp = "sigmoid", alpha = 0.4)
755
+
756
+ #Three checkpoint merging. Only "add_difference" method actually works on all three checkpoints. Using any other options will ignore the 3rd checkpoint.
757
+ merged_pipe_2 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion","prompthero/openjourney"], force = True, interp = "add_difference", alpha = 0.4)
758
+
759
+ prompt = "An astronaut riding a horse on Mars"
760
+
761
+ image = merged_pipe(prompt).images[0]
762
+
763
+ ```
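+
+ The merged pipeline is a regular `DiffusionPipeline`, so if you want to reuse it without re-merging, it can be saved and reloaded like any other checkpoint. A small sketch (the local path is just an example):
+
+ ```python
+ # persist the merged weights so they can be reloaded later without re-merging
+ merged_pipe.save_pretrained("./my-merged-stable-diffusion")
+
+ # reload it as a normal pipeline
+ reloaded_pipe = DiffusionPipeline.from_pretrained("./my-merged-stable-diffusion")
+ ```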
764
+ Some examples along with the merge details:
765
+
766
+ 1. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" ; Sigmoid interpolation; alpha = 0.8
767
+
768
+ ![Stable plus Waifu Sigmoid 0.8](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/stability_v1_4_waifu_sig_0.8.png)
769
+
770
+ 2. "hakurei/waifu-diffusion" + "prompthero/openjourney" ; Inverse Sigmoid interpolation; alpha = 0.8
771
+
772
+ ![Stable plus Waifu Sigmoid 0.8](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/waifu_openjourney_inv_sig_0.8.png)
773
+
774
+
775
+ 3. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" + "prompthero/openjourney"; Add Difference interpolation; alpha = 0.5
776
+
777
+ ![Stable plus Waifu plus openjourney add_diff 0.5](https://huggingface.co/datasets/NagaSaiAbhinay/CheckpointMergerSamples/resolve/main/stable_waifu_openjourney_add_diff_0.5.png)
778
+
779
+
780
+ ### Stable Diffusion Comparisons
781
+
782
+ This Community Pipeline enables the comparison between the 4 checkpoints that exist for Stable Diffusion. They can be found through the following links:
783
+ 1. [Stable Diffusion v1.1](https://huggingface.co/CompVis/stable-diffusion-v1-1)
784
+ 2. [Stable Diffusion v1.2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
785
+ 3. [Stable Diffusion v1.3](https://huggingface.co/CompVis/stable-diffusion-v1-3)
786
+ 4. [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
787
+
788
+ ```python
789
+ from diffusers import DiffusionPipeline
790
+ import matplotlib.pyplot as plt
791
+
792
+ pipe = DiffusionPipeline.from_pretrained('CompVis/stable-diffusion-v1-4', custom_pipeline='suvadityamuk/StableDiffusionComparison')
793
+ pipe.enable_attention_slicing()
794
+ pipe = pipe.to('cuda')
795
+ prompt = "an astronaut riding a horse on mars"
796
+ output = pipe(prompt)
797
+
798
+ plt.subplot(2, 2, 1)
799
+ plt.imshow(output.images[0])
800
+ plt.title('Stable Diffusion v1.1')
801
+ plt.axis('off')
802
+ plt.subplot(2, 2, 2)
803
+ plt.imshow(output.images[1])
804
+ plt.title('Stable Diffusion v1.2')
805
+ plt.axis('off')
806
+ plt.subplot(2, 2, 3)
807
+ plt.imshow(output.images[2])
808
+ plt.title('Stable Diffusion v1.3')
809
+ plt.axis('off')
810
+ plt.subplot(2, 2, 4)
811
+ plt.imshow(output.images[3])
812
+ plt.title('Stable Diffusion v1.4')
813
+ plt.axis('off')
814
+
815
+ plt.show()
816
+ ```
817
+
818
+ As a result, you get a grid of all 4 generated images shown together, which captures the difference that the progression of training makes between the 4 checkpoints.
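+
+ If you prefer a single saved image over a matplotlib figure, the `image_grid` helper shown in the Multilingual example above can be reused. A short sketch, assuming that helper is defined in the same session:
+
+ ```python
+ grid = image_grid(output.images, rows=2, cols=2)
+ grid.save("stable_diffusion_comparison.png")
+ ```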
v0.11.1/bit_diffusion.py ADDED
@@ -0,0 +1,265 @@
1
+ from typing import Optional, Tuple, Union
2
+
3
+ import torch
4
+
5
+ from diffusers import DDIMScheduler, DDPMScheduler, DiffusionPipeline, UNet2DConditionModel
6
+ from diffusers.pipeline_utils import ImagePipelineOutput
7
+ from diffusers.schedulers.scheduling_ddim import DDIMSchedulerOutput
8
+ from diffusers.schedulers.scheduling_ddpm import DDPMSchedulerOutput
9
+ from einops import rearrange, reduce
10
+
11
+
12
+ BITS = 8
13
+
14
+
15
+ # convert to bit representations and back taken from https://github.com/lucidrains/bit-diffusion/blob/main/bit_diffusion/bit_diffusion.py
16
+ def decimal_to_bits(x, bits=BITS):
17
+ """expects image tensor ranging from 0 to 1, outputs bit tensor ranging from -1 to 1"""
18
+ device = x.device
19
+
20
+ x = (x * 255).int().clamp(0, 255)
21
+
22
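+ # build a per-bit mask [128, 64, ..., 1]; together with the rearranges below, every bit of every channel ends up in its own plane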
+ mask = 2 ** torch.arange(bits - 1, -1, -1, device=device)
23
+ mask = rearrange(mask, "d -> d 1 1")
24
+ x = rearrange(x, "b c h w -> b c 1 h w")
25
+
26
+ bits = ((x & mask) != 0).float()
27
+ bits = rearrange(bits, "b c d h w -> b (c d) h w")
28
+ bits = bits * 2 - 1
29
+ return bits
30
+
31
+
32
+ def bits_to_decimal(x, bits=BITS):
33
+ """expects bits from -1 to 1, outputs image tensor from 0 to 1"""
34
+ device = x.device
35
+
36
+ x = (x > 0).int()
37
+ mask = 2 ** torch.arange(bits - 1, -1, -1, device=device, dtype=torch.int32)
38
+
39
+ mask = rearrange(mask, "d -> d 1 1")
40
+ x = rearrange(x, "b (c d) h w -> b c d h w", d=8)
41
+ dec = reduce(x * mask, "b c d h w -> b c h w", "sum")
42
+ return (dec / 255).clamp(0.0, 1.0)
43
+
44
+
45
+ # modified scheduler step functions for clamping the predicted x_0 between -bit_scale and +bit_scale
46
+ def ddim_bit_scheduler_step(
47
+ self,
48
+ model_output: torch.FloatTensor,
49
+ timestep: int,
50
+ sample: torch.FloatTensor,
51
+ eta: float = 0.0,
52
+ use_clipped_model_output: bool = True,
53
+ generator=None,
54
+ return_dict: bool = True,
55
+ ) -> Union[DDIMSchedulerOutput, Tuple]:
56
+ """
57
+ Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
58
+ process from the learned model outputs (most often the predicted noise).
59
+ Args:
60
+ model_output (`torch.FloatTensor`): direct output from learned diffusion model.
61
+ timestep (`int`): current discrete timestep in the diffusion chain.
62
+ sample (`torch.FloatTensor`):
63
+ current instance of sample being created by diffusion process.
64
+ eta (`float`): weight of noise for added noise in diffusion step.
65
+ use_clipped_model_output (`bool`): TODO
66
+ generator: random number generator.
67
+ return_dict (`bool`): option for returning tuple rather than DDIMSchedulerOutput class
68
+ Returns:
69
+ [`~schedulers.scheduling_utils.DDIMSchedulerOutput`] or `tuple`:
70
+ [`~schedulers.scheduling_utils.DDIMSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
71
+ returning a tuple, the first element is the sample tensor.
72
+ """
73
+ if self.num_inference_steps is None:
74
+ raise ValueError(
75
+ "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
76
+ )
77
+
78
+ # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf
79
+ # Ideally, read DDIM paper in-detail understanding
80
+
81
+ # Notation (<variable name> -> <name in paper>
82
+ # - pred_noise_t -> e_theta(x_t, t)
83
+ # - pred_original_sample -> f_theta(x_t, t) or x_0
84
+ # - std_dev_t -> sigma_t
85
+ # - eta -> η
86
+ # - pred_sample_direction -> "direction pointing to x_t"
87
+ # - pred_prev_sample -> "x_t-1"
88
+
89
+ # 1. get previous step value (=t-1)
90
+ prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
91
+
92
+ # 2. compute alphas, betas
93
+ alpha_prod_t = self.alphas_cumprod[timestep]
94
+ alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
95
+
96
+ beta_prod_t = 1 - alpha_prod_t
97
+
98
+ # 3. compute predicted original sample from predicted noise also called
99
+ # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
100
+ pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
101
+
102
+ # 4. Clip "predicted x_0"
103
+ scale = self.bit_scale
104
+ if self.config.clip_sample:
105
+ pred_original_sample = torch.clamp(pred_original_sample, -scale, scale)
106
+
107
+ # 5. compute variance: "sigma_t(η)" -> see formula (16)
108
+ # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1)
109
+ variance = self._get_variance(timestep, prev_timestep)
110
+ std_dev_t = eta * variance ** (0.5)
111
+
112
+ if use_clipped_model_output:
113
+ # the model_output is always re-derived from the clipped x_0 in Glide
114
+ model_output = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5)
115
+
116
+ # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
117
+ pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * model_output
118
+
119
+ # 7. compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
120
+ prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction
121
+
122
+ if eta > 0:
123
+ # randn_like does not support generator https://github.com/pytorch/pytorch/issues/27072
124
+ device = model_output.device if torch.is_tensor(model_output) else "cpu"
125
+ noise = torch.randn(model_output.shape, dtype=model_output.dtype, generator=generator).to(device)
126
+ variance = self._get_variance(timestep, prev_timestep) ** (0.5) * eta * noise
127
+
128
+ prev_sample = prev_sample + variance
129
+
130
+ if not return_dict:
131
+ return (prev_sample,)
132
+
133
+ return DDIMSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)
134
+
135
+
136
+ def ddpm_bit_scheduler_step(
137
+ self,
138
+ model_output: torch.FloatTensor,
139
+ timestep: int,
140
+ sample: torch.FloatTensor,
141
+ prediction_type="epsilon",
142
+ generator=None,
143
+ return_dict: bool = True,
144
+ ) -> Union[DDPMSchedulerOutput, Tuple]:
145
+ """
146
+ Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
147
+ process from the learned model outputs (most often the predicted noise).
148
+ Args:
149
+ model_output (`torch.FloatTensor`): direct output from learned diffusion model.
150
+ timestep (`int`): current discrete timestep in the diffusion chain.
151
+ sample (`torch.FloatTensor`):
152
+ current instance of sample being created by diffusion process.
153
+ prediction_type (`str`, default `epsilon`):
154
+ indicates whether the model predicts the noise (epsilon), or the samples (`sample`).
155
+ generator: random number generator.
156
+ return_dict (`bool`): option for returning tuple rather than DDPMSchedulerOutput class
157
+ Returns:
158
+ [`~schedulers.scheduling_utils.DDPMSchedulerOutput`] or `tuple`:
159
+ [`~schedulers.scheduling_utils.DDPMSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
160
+ returning a tuple, the first element is the sample tensor.
161
+ """
162
+ t = timestep
163
+
164
+ if model_output.shape[1] == sample.shape[1] * 2 and self.variance_type in ["learned", "learned_range"]:
165
+ model_output, predicted_variance = torch.split(model_output, sample.shape[1], dim=1)
166
+ else:
167
+ predicted_variance = None
168
+
169
+ # 1. compute alphas, betas
170
+ alpha_prod_t = self.alphas_cumprod[t]
171
+ alpha_prod_t_prev = self.alphas_cumprod[t - 1] if t > 0 else self.one
172
+ beta_prod_t = 1 - alpha_prod_t
173
+ beta_prod_t_prev = 1 - alpha_prod_t_prev
174
+
175
+ # 2. compute predicted original sample from predicted noise also called
176
+ # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
177
+ if prediction_type == "epsilon":
178
+ pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
179
+ elif prediction_type == "sample":
180
+ pred_original_sample = model_output
181
+ else:
182
+ raise ValueError(f"Unsupported prediction_type {prediction_type}.")
183
+
184
+ # 3. Clip "predicted x_0"
185
+ scale = self.bit_scale
186
+ if self.config.clip_sample:
187
+ pred_original_sample = torch.clamp(pred_original_sample, -scale, scale)
188
+
189
+ # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
190
+ # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
191
+ pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * self.betas[t]) / beta_prod_t
192
+ current_sample_coeff = self.alphas[t] ** (0.5) * beta_prod_t_prev / beta_prod_t
193
+
194
+ # 5. Compute predicted previous sample µ_t
195
+ # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
196
+ pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample
197
+
198
+ # 6. Add noise
199
+ variance = 0
200
+ if t > 0:
201
+ noise = torch.randn(
202
+ model_output.size(), dtype=model_output.dtype, layout=model_output.layout, generator=generator
203
+ ).to(model_output.device)
204
+ variance = (self._get_variance(t, predicted_variance=predicted_variance) ** 0.5) * noise
205
+
206
+ pred_prev_sample = pred_prev_sample + variance
207
+
208
+ if not return_dict:
209
+ return (pred_prev_sample,)
210
+
211
+ return DDPMSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample)
212
+
213
+
214
+ class BitDiffusion(DiffusionPipeline):
215
+ def __init__(
216
+ self,
217
+ unet: UNet2DConditionModel,
218
+ scheduler: Union[DDIMScheduler, DDPMScheduler],
219
+ bit_scale: Optional[float] = 1.0,
220
+ ):
221
+ super().__init__()
222
+ self.bit_scale = bit_scale
223
+ self.scheduler.step = (
224
+ ddim_bit_scheduler_step if isinstance(scheduler, DDIMScheduler) else ddpm_bit_scheduler_step
225
+ )
226
+
227
+ self.register_modules(unet=unet, scheduler=scheduler)
228
+
229
+ @torch.no_grad()
230
+ def __call__(
231
+ self,
232
+ height: Optional[int] = 256,
233
+ width: Optional[int] = 256,
234
+ num_inference_steps: Optional[int] = 50,
235
+ generator: Optional[torch.Generator] = None,
236
+ batch_size: Optional[int] = 1,
237
+ output_type: Optional[str] = "pil",
238
+ return_dict: bool = True,
239
+ **kwargs,
240
+ ) -> Union[Tuple, ImagePipelineOutput]:
241
+ latents = torch.randn(
242
+ (batch_size, self.unet.in_channels, height, width),
243
+ generator=generator,
244
+ )
245
+ latents = decimal_to_bits(latents) * self.bit_scale
246
+ latents = latents.to(self.device)
247
+
248
+ self.scheduler.set_timesteps(num_inference_steps)
249
+
250
+ for t in self.progress_bar(self.scheduler.timesteps):
251
+ # predict the noise residual
252
+ noise_pred = self.unet(latents, t).sample
253
+
254
+ # compute the previous noisy sample x_t -> x_t-1
255
+ latents = self.scheduler.step(noise_pred, t, latents).prev_sample
256
+
257
+ image = bits_to_decimal(latents)
258
+
259
+ if output_type == "pil":
260
+ image = self.numpy_to_pil(image)
261
+
262
+ if not return_dict:
263
+ return (image,)
264
+
265
+ return ImagePipelineOutput(images=image)
v0.11.1/checkpoint_merger.py ADDED
@@ -0,0 +1,262 @@
1
+ import glob
2
+ import os
3
+ from typing import Dict, List, Union
4
+
5
+ import torch
6
+
7
+ from diffusers import DiffusionPipeline, __version__
8
+ from diffusers.pipeline_utils import (
9
+ CONFIG_NAME,
10
+ DIFFUSERS_CACHE,
11
+ ONNX_WEIGHTS_NAME,
12
+ SCHEDULER_CONFIG_NAME,
13
+ WEIGHTS_NAME,
14
+ )
15
+ from huggingface_hub import snapshot_download
16
+
17
+
18
+ class CheckpointMergerPipeline(DiffusionPipeline):
19
+ """
20
+ A class that that supports merging diffusion models based on the discussion here:
21
+ https://github.com/huggingface/diffusers/issues/877
22
+
23
+ Example usage:
24
+
25
+ pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger.py")
26
+
27
+ merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4","prompthero/openjourney"], interp = 'inv_sigmoid', alpha = 0.8, force = True)
28
+
29
+ merged_pipe.to('cuda')
30
+
31
+ prompt = "An astronaut riding a unicycle on Mars"
32
+
33
+ results = merged_pipe(prompt)
34
+
35
+ ## For more details, see the docstring for the merge method.
36
+
37
+ """
38
+
39
+ def __init__(self):
40
+ super().__init__()
41
+
42
+ def _compare_model_configs(self, dict0, dict1):
43
+ if dict0 == dict1:
44
+ return True
45
+ else:
46
+ config0, meta_keys0 = self._remove_meta_keys(dict0)
47
+ config1, meta_keys1 = self._remove_meta_keys(dict1)
48
+ if config0 == config1:
49
+ print(f"Warning !: Mismatch in keys {meta_keys0} and {meta_keys1}.")
50
+ return True
51
+ return False
52
+
53
+ def _remove_meta_keys(self, config_dict: Dict):
54
+ meta_keys = []
55
+ temp_dict = config_dict.copy()
56
+ for key in config_dict.keys():
57
+ if key.startswith("_"):
58
+ temp_dict.pop(key)
59
+ meta_keys.append(key)
60
+ return (temp_dict, meta_keys)
61
+
62
+ @torch.no_grad()
63
+ def merge(self, pretrained_model_name_or_path_list: List[Union[str, os.PathLike]], **kwargs):
64
+ """
65
+ Returns a new pipeline object of the class 'DiffusionPipeline' with the merged checkpoints(weights) of the models passed
66
+ in the argument 'pretrained_model_name_or_path_list' as a list.
67
+
68
+ Parameters:
69
+ -----------
70
+ pretrained_model_name_or_path_list : A list of valid pretrained model names in the HuggingFace hub or paths to locally stored models in the HuggingFace format.
71
+
72
+ **kwargs:
73
+ Supports all the default DiffusionPipeline.get_config_dict kwargs viz..
74
+
75
+ cache_dir, resume_download, force_download, proxies, local_files_only, use_auth_token, revision, torch_dtype, device_map.
76
+
77
+ alpha - The interpolation parameter. Ranges from 0 to 1. It affects the ratio in which the checkpoints are merged. A 0.8 alpha
78
+ would mean that the first model checkpoints would affect the final result far less than an alpha of 0.2
79
+
80
+ interp - The interpolation method to use for the merging. Supports "sigmoid", "inv_sigmoid", "add_difference" and None.
81
+ Passing None uses the default interpolation which is weighted sum interpolation. For merging three checkpoints, only "add_difference" is supported.
82
+
83
+ force - Whether to ignore mismatch in model_config.json for the current models. Defaults to False.
84
+
85
+ """
86
+ # Default kwargs from DiffusionPipeline
87
+ cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
88
+ resume_download = kwargs.pop("resume_download", False)
89
+ force_download = kwargs.pop("force_download", False)
90
+ proxies = kwargs.pop("proxies", None)
91
+ local_files_only = kwargs.pop("local_files_only", False)
92
+ use_auth_token = kwargs.pop("use_auth_token", None)
93
+ revision = kwargs.pop("revision", None)
94
+ torch_dtype = kwargs.pop("torch_dtype", None)
95
+ device_map = kwargs.pop("device_map", None)
96
+
97
+ alpha = kwargs.pop("alpha", 0.5)
98
+ interp = kwargs.pop("interp", None)
99
+
100
+ print("Recieved list", pretrained_model_name_or_path_list)
101
+
102
+ checkpoint_count = len(pretrained_model_name_or_path_list)
103
+ # Ignore result from model_index.json comparison of the two checkpoints
104
+ force = kwargs.pop("force", False)
105
+
106
+ # If less than 2 checkpoints, nothing to merge. If more than 3, not supported for now.
107
+ if checkpoint_count > 3 or checkpoint_count < 2:
108
+ raise ValueError(
109
+ "Received incorrect number of checkpoints to merge. Ensure that either 2 or 3 checkpoints are being"
110
+ " passed."
111
+ )
112
+
113
+ print("Received the right number of checkpoints")
114
+ # chkpt0, chkpt1 = pretrained_model_name_or_path_list[0:2]
115
+ # chkpt2 = pretrained_model_name_or_path_list[2] if checkpoint_count == 3 else None
116
+
117
+ # Validate that the checkpoints can be merged
118
+ # Step 1: Load the model config and compare the checkpoints. We'll compare the model_index.json first while ignoring the keys starting with '_'
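+ # (Meta keys such as "_class_name" and "_diffusers_version" are stripped before comparing;
+ # only the component entries, e.g. "unet", "vae", "text_encoder", have to match.)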
119
+ config_dicts = []
120
+ for pretrained_model_name_or_path in pretrained_model_name_or_path_list:
121
+ if not os.path.isdir(pretrained_model_name_or_path):
122
+ config_dict = DiffusionPipeline.get_config_dict(
123
+ pretrained_model_name_or_path,
124
+ cache_dir=cache_dir,
125
+ resume_download=resume_download,
126
+ force_download=force_download,
127
+ proxies=proxies,
128
+ local_files_only=local_files_only,
129
+ use_auth_token=use_auth_token,
130
+ revision=revision,
131
+ )
132
+ config_dicts.append(config_dict)
133
+
134
+ comparison_result = True
135
+ for idx in range(1, len(config_dicts)):
136
+ comparison_result &= self._compare_model_configs(config_dicts[idx - 1], config_dicts[idx])
137
+ if not force and comparison_result is False:
138
+ raise ValueError("Incompatible checkpoints. Please check model_index.json for the models.")
139
+ print(config_dicts[0], config_dicts[1])
140
+ print("Compatible model_index.json files found")
141
+ # Step 2: Basic Validation has succeeded. Let's download the models and save them into our local files.
142
+ cached_folders = []
143
+ for pretrained_model_name_or_path, config_dict in zip(pretrained_model_name_or_path_list, config_dicts):
144
+ folder_names = [k for k in config_dict.keys() if not k.startswith("_")]
145
+ allow_patterns = [os.path.join(k, "*") for k in folder_names]
146
+ allow_patterns += [
147
+ WEIGHTS_NAME,
148
+ SCHEDULER_CONFIG_NAME,
149
+ CONFIG_NAME,
150
+ ONNX_WEIGHTS_NAME,
151
+ DiffusionPipeline.config_name,
152
+ ]
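+ # For a typical Stable Diffusion checkpoint this roughly resolves to patterns like
+ # ["unet/*", "vae/*", "text_encoder/*", "tokenizer/*", "scheduler/*",
+ #  "diffusion_pytorch_model.bin", "scheduler_config.json", "config.json",
+ #  "model.onnx", "model_index.json"] (the exact folder names come from model_index.json).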
153
+ requested_pipeline_class = config_dict.get("_class_name")
154
+ user_agent = {"diffusers": __version__, "pipeline_class": requested_pipeline_class}
155
+
156
+ cached_folder = snapshot_download(
157
+ pretrained_model_name_or_path,
158
+ cache_dir=cache_dir,
159
+ resume_download=resume_download,
160
+ proxies=proxies,
161
+ local_files_only=local_files_only,
162
+ use_auth_token=use_auth_token,
163
+ revision=revision,
164
+ allow_patterns=allow_patterns,
165
+ user_agent=user_agent,
166
+ )
167
+ print("Cached Folder", cached_folder)
168
+ cached_folders.append(cached_folder)
169
+
170
+ # Step 3:
171
+ # Load the first checkpoint as a diffusion pipeline and modify its module state_dicts in place
172
+ final_pipe = DiffusionPipeline.from_pretrained(
173
+ cached_folders[0], torch_dtype=torch_dtype, device_map=device_map
174
+ )
175
+
176
+ checkpoint_path_2 = None
177
+ if len(cached_folders) > 2:
178
+ checkpoint_path_2 = os.path.join(cached_folders[2])
179
+
180
+ if interp == "sigmoid":
181
+ theta_func = CheckpointMergerPipeline.sigmoid
182
+ elif interp == "inv_sigmoid":
183
+ theta_func = CheckpointMergerPipeline.inv_sigmoid
184
+ elif interp in ("add_diff", "add_difference"):
185
+ theta_func = CheckpointMergerPipeline.add_difference
186
+ else:
187
+ theta_func = CheckpointMergerPipeline.weighted_sum
188
+
189
+ # Find each module's state dict.
190
+ for attr in final_pipe.config.keys():
191
+ if not attr.startswith("_"):
192
+ checkpoint_path_1 = os.path.join(cached_folders[1], attr)
193
+ if os.path.exists(checkpoint_path_1):
194
+ files = glob.glob(os.path.join(checkpoint_path_1, "*.bin"))
195
+ checkpoint_path_1 = files[0] if len(files) > 0 else None
196
+ if checkpoint_path_2 is not None and os.path.exists(checkpoint_path_2):
197
+ files = glob.glob(os.path.join(checkpoint_path_2, "*.bin"))
198
+ checkpoint_path_2 = files[0] if len(files) > 0 else None
199
+ # For an attr, if both checkpoint_path_1 and checkpoint_path_2 are None, skip it.
200
+ # If at least one is present, merge it according to the chosen interpolation method, provided the state_dict keys match.
201
+ if checkpoint_path_1 is None and checkpoint_path_2 is None:
202
+ print("SKIPPING ATTR ", attr)
203
+ continue
204
+ try:
205
+ module = getattr(final_pipe, attr)
206
+ theta_0 = getattr(module, "state_dict")
207
+ theta_0 = theta_0()
208
+
209
+ update_theta_0 = getattr(module, "load_state_dict")
210
+ theta_1 = torch.load(checkpoint_path_1)
211
+
212
+ theta_2 = torch.load(checkpoint_path_2) if checkpoint_path_2 else None
213
+
214
+ if not theta_0.keys() == theta_1.keys():
215
+ print("SKIPPING ATTR ", attr, " DUE TO MISMATCH")
216
+ continue
217
+ if theta_2 and not theta_1.keys() == theta_2.keys():
218
+ print("SKIPPING ATTR ", attr, " DUE TO MISMATCH")
+ continue
219
+ except Exception:
220
+ print("SKIPPING ATTR ", attr)
221
+ continue
222
+ print("Found dicts for")
223
+ print(attr)
224
+ print(checkpoint_path_1)
225
+ print(checkpoint_path_2)
226
+
227
+ for key in theta_0.keys():
228
+ if theta_2:
229
+ theta_0[key] = theta_func(theta_0[key], theta_1[key], theta_2[key], alpha)
230
+ else:
231
+ theta_0[key] = theta_func(theta_0[key], theta_1[key], None, alpha)
232
+
233
+ del theta_1
234
+ del theta_2
235
+ update_theta_0(theta_0)
236
+
237
+ del theta_0
238
+ print("Diffusion pipeline successfully updated with merged weights")
239
+
240
+ return final_pipe
241
+
242
+ @staticmethod
243
+ def weighted_sum(theta0, theta1, theta2, alpha):
244
+ return ((1 - alpha) * theta0) + (alpha * theta1)
245
+
246
+ # Smoothstep (https://en.wikipedia.org/wiki/Smoothstep)
247
+ @staticmethod
248
+ def sigmoid(theta0, theta1, theta2, alpha):
249
+ alpha = alpha * alpha * (3 - (2 * alpha))
250
+ return theta0 + ((theta1 - theta0) * alpha)
251
+
252
+ # Inverse Smoothstep (https://en.wikipedia.org/wiki/Smoothstep)
253
+ @staticmethod
254
+ def inv_sigmoid(theta0, theta1, theta2, alpha):
255
+ import math
256
+
257
+ alpha = 0.5 - math.sin(math.asin(1.0 - 2.0 * alpha) / 3.0)
258
+ return theta0 + ((theta1 - theta0) * alpha)
259
+
260
+ @staticmethod
261
+ def add_difference(theta0, theta1, theta2, alpha):
262
+ return theta0 + (theta1 - theta2) * (1.0 - alpha)
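+
+ # Quick numeric check of the interpolation curves (illustrative only, not executed):
+ # for alpha = 0.25,
+ #   weighted_sum   -> 0.75 * theta0 + 0.25 * theta1
+ #   sigmoid        -> smoothstep(0.25) ≈ 0.156, i.e. theta0 + 0.156 * (theta1 - theta0)
+ #   inv_sigmoid    -> inverse smoothstep(0.25) ≈ 0.326, i.e. theta0 + 0.326 * (theta1 - theta0)
+ #   add_difference -> theta0 + 0.75 * (theta1 - theta2)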
v0.11.1/clip_guided_stable_diffusion.py ADDED
@@ -0,0 +1,351 @@
1
+ import inspect
2
+ from typing import List, Optional, Union
3
+
4
+ import torch
5
+ from torch import nn
6
+ from torch.nn import functional as F
7
+
8
+ from diffusers import (
9
+ AutoencoderKL,
10
+ DDIMScheduler,
11
+ DiffusionPipeline,
12
+ LMSDiscreteScheduler,
13
+ PNDMScheduler,
14
+ UNet2DConditionModel,
15
+ )
16
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
17
+ from torchvision import transforms
18
+ from transformers import CLIPFeatureExtractor, CLIPModel, CLIPTextModel, CLIPTokenizer
19
+
20
+
21
+ class MakeCutouts(nn.Module):
22
+ def __init__(self, cut_size, cut_power=1.0):
23
+ super().__init__()
24
+
25
+ self.cut_size = cut_size
26
+ self.cut_power = cut_power
27
+
28
+ def forward(self, pixel_values, num_cutouts):
29
+ sideY, sideX = pixel_values.shape[2:4]
30
+ max_size = min(sideX, sideY)
31
+ min_size = min(sideX, sideY, self.cut_size)
32
+ cutouts = []
33
+ for _ in range(num_cutouts):
34
+ size = int(torch.rand([]) ** self.cut_power * (max_size - min_size) + min_size)
35
+ offsetx = torch.randint(0, sideX - size + 1, ())
36
+ offsety = torch.randint(0, sideY - size + 1, ())
37
+ cutout = pixel_values[:, :, offsety : offsety + size, offsetx : offsetx + size]
38
+ cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
39
+ return torch.cat(cutouts)
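+
+ # MakeCutouts samples `num_cutouts` random square crops (their size biased by `cut_power`)
+ # and pools each one to `cut_size`, so the CLIP loss below scores several views of the
+ # image instead of a single resized copy.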
40
+
41
+
42
+ def spherical_dist_loss(x, y):
43
+ x = F.normalize(x, dim=-1)
44
+ y = F.normalize(y, dim=-1)
45
+ return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
46
+
47
+
48
+ def set_requires_grad(model, value):
49
+ for param in model.parameters():
50
+ param.requires_grad = value
51
+
52
+
53
+ class CLIPGuidedStableDiffusion(DiffusionPipeline):
54
+ """CLIP guided stable diffusion based on the amazing repo by @crowsonkb and @Jack000
55
+ - https://github.com/Jack000/glid-3-xl
56
+ - https://github.dev/crowsonkb/k-diffusion
57
+ """
58
+
59
+ def __init__(
60
+ self,
61
+ vae: AutoencoderKL,
62
+ text_encoder: CLIPTextModel,
63
+ clip_model: CLIPModel,
64
+ tokenizer: CLIPTokenizer,
65
+ unet: UNet2DConditionModel,
66
+ scheduler: Union[PNDMScheduler, LMSDiscreteScheduler, DDIMScheduler],
67
+ feature_extractor: CLIPFeatureExtractor,
68
+ ):
69
+ super().__init__()
70
+ self.register_modules(
71
+ vae=vae,
72
+ text_encoder=text_encoder,
73
+ clip_model=clip_model,
74
+ tokenizer=tokenizer,
75
+ unet=unet,
76
+ scheduler=scheduler,
77
+ feature_extractor=feature_extractor,
78
+ )
79
+
80
+ self.normalize = transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std)
81
+ cut_out_size = (
82
+ feature_extractor.size
83
+ if isinstance(feature_extractor.size, int)
84
+ else feature_extractor.size["shortest_edge"]
85
+ )
86
+ self.make_cutouts = MakeCutouts(cut_out_size)
87
+
88
+ set_requires_grad(self.text_encoder, False)
89
+ set_requires_grad(self.clip_model, False)
90
+
91
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
92
+ if slice_size == "auto":
93
+ # half the attention head size is usually a good trade-off between
94
+ # speed and memory
95
+ slice_size = self.unet.config.attention_head_dim // 2
96
+ self.unet.set_attention_slice(slice_size)
97
+
98
+ def disable_attention_slicing(self):
99
+ self.enable_attention_slicing(None)
100
+
101
+ def freeze_vae(self):
102
+ set_requires_grad(self.vae, False)
103
+
104
+ def unfreeze_vae(self):
105
+ set_requires_grad(self.vae, True)
106
+
107
+ def freeze_unet(self):
108
+ set_requires_grad(self.unet, False)
109
+
110
+ def unfreeze_unet(self):
111
+ set_requires_grad(self.unet, True)
112
+
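+ # cond_fn implements the CLIP guidance step: it estimates x_0 from the current noise
+ # prediction, decodes it with the VAE, embeds the result with CLIP, and follows the
+ # negative gradient of the spherical distance to the text embedding. For K-LMS the
+ # latents are nudged directly; for DDIM/PNDM the predicted noise is adjusted instead.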
113
+ @torch.enable_grad()
114
+ def cond_fn(
115
+ self,
116
+ latents,
117
+ timestep,
118
+ index,
119
+ text_embeddings,
120
+ noise_pred_original,
121
+ text_embeddings_clip,
122
+ clip_guidance_scale,
123
+ num_cutouts,
124
+ use_cutouts=True,
125
+ ):
126
+ latents = latents.detach().requires_grad_()
127
+
128
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
129
+ sigma = self.scheduler.sigmas[index]
130
+ # the model input needs to be scaled to match the continuous ODE formulation in K-LMS
131
+ latent_model_input = latents / ((sigma**2 + 1) ** 0.5)
132
+ else:
133
+ latent_model_input = latents
134
+
135
+ # predict the noise residual
136
+ noise_pred = self.unet(latent_model_input, timestep, encoder_hidden_states=text_embeddings).sample
137
+
138
+ if isinstance(self.scheduler, (PNDMScheduler, DDIMScheduler)):
139
+ alpha_prod_t = self.scheduler.alphas_cumprod[timestep]
140
+ beta_prod_t = 1 - alpha_prod_t
141
+ # compute predicted original sample from predicted noise also called
142
+ # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
143
+ pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5)
144
+
145
+ fac = torch.sqrt(beta_prod_t)
146
+ sample = pred_original_sample * (fac) + latents * (1 - fac)
147
+ elif isinstance(self.scheduler, LMSDiscreteScheduler):
148
+ sigma = self.scheduler.sigmas[index]
149
+ sample = latents - sigma * noise_pred
150
+ else:
151
+ raise ValueError(f"scheduler type {type(self.scheduler)} not supported")
152
+
153
+ sample = 1 / 0.18215 * sample
154
+ image = self.vae.decode(sample).sample
155
+ image = (image / 2 + 0.5).clamp(0, 1)
156
+
157
+ if use_cutouts:
158
+ image = self.make_cutouts(image, num_cutouts)
159
+ else:
160
+ image = transforms.Resize(self.feature_extractor.size)(image)
161
+ image = self.normalize(image).to(latents.dtype)
162
+
163
+ image_embeddings_clip = self.clip_model.get_image_features(image)
164
+ image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
165
+
166
+ if use_cutouts:
167
+ dists = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip)
168
+ dists = dists.view([num_cutouts, sample.shape[0], -1])
169
+ loss = dists.sum(2).mean(0).sum() * clip_guidance_scale
170
+ else:
171
+ loss = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip).mean() * clip_guidance_scale
172
+
173
+ grads = -torch.autograd.grad(loss, latents)[0]
174
+
175
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
176
+ latents = latents.detach() + grads * (sigma**2)
177
+ noise_pred = noise_pred_original
178
+ else:
179
+ noise_pred = noise_pred_original - torch.sqrt(beta_prod_t) * grads
180
+ return noise_pred, latents
181
+
182
+ @torch.no_grad()
183
+ def __call__(
184
+ self,
185
+ prompt: Union[str, List[str]],
186
+ height: Optional[int] = 512,
187
+ width: Optional[int] = 512,
188
+ num_inference_steps: Optional[int] = 50,
189
+ guidance_scale: Optional[float] = 7.5,
190
+ num_images_per_prompt: Optional[int] = 1,
191
+ eta: float = 0.0,
192
+ clip_guidance_scale: Optional[float] = 100,
193
+ clip_prompt: Optional[Union[str, List[str]]] = None,
194
+ num_cutouts: Optional[int] = 4,
195
+ use_cutouts: Optional[bool] = True,
196
+ generator: Optional[torch.Generator] = None,
197
+ latents: Optional[torch.FloatTensor] = None,
198
+ output_type: Optional[str] = "pil",
199
+ return_dict: bool = True,
200
+ ):
201
+ if isinstance(prompt, str):
202
+ batch_size = 1
203
+ elif isinstance(prompt, list):
204
+ batch_size = len(prompt)
205
+ else:
206
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
207
+
208
+ if height % 8 != 0 or width % 8 != 0:
209
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
210
+
211
+ # get prompt text embeddings
212
+ text_input = self.tokenizer(
213
+ prompt,
214
+ padding="max_length",
215
+ max_length=self.tokenizer.model_max_length,
216
+ truncation=True,
217
+ return_tensors="pt",
218
+ )
219
+ text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
220
+ # duplicate text embeddings for each generation per prompt
221
+ text_embeddings = text_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
222
+
223
+ if clip_guidance_scale > 0:
224
+ if clip_prompt is not None:
225
+ clip_text_input = self.tokenizer(
226
+ clip_prompt,
227
+ padding="max_length",
228
+ max_length=self.tokenizer.model_max_length,
229
+ truncation=True,
230
+ return_tensors="pt",
231
+ ).input_ids.to(self.device)
232
+ else:
233
+ clip_text_input = text_input.input_ids.to(self.device)
234
+ text_embeddings_clip = self.clip_model.get_text_features(clip_text_input)
235
+ text_embeddings_clip = text_embeddings_clip / text_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
236
+ # duplicate text embeddings clip for each generation per prompt
237
+ text_embeddings_clip = text_embeddings_clip.repeat_interleave(num_images_per_prompt, dim=0)
238
+
239
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
240
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
241
+ # corresponds to doing no classifier free guidance.
242
+ do_classifier_free_guidance = guidance_scale > 1.0
243
+ # get unconditional embeddings for classifier free guidance
244
+ if do_classifier_free_guidance:
245
+ max_length = text_input.input_ids.shape[-1]
246
+ uncond_input = self.tokenizer([""], padding="max_length", max_length=max_length, return_tensors="pt")
247
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
248
+ # duplicate unconditional embeddings for each generation per prompt
249
+ uncond_embeddings = uncond_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
250
+
251
+ # For classifier free guidance, we need to do two forward passes.
252
+ # Here we concatenate the unconditional and text embeddings into a single batch
253
+ # to avoid doing two forward passes
254
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
255
+
256
+ # get the initial random noise unless the user supplied it
257
+
258
+ # Unlike in other pipelines, latents need to be generated in the target device
259
+ # for 1-to-1 results reproducibility with the CompVis implementation.
260
+ # However this currently doesn't work in `mps`.
261
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
262
+ latents_dtype = text_embeddings.dtype
263
+ if latents is None:
264
+ if self.device.type == "mps":
265
+ # randn does not work reproducibly on mps
266
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
267
+ self.device
268
+ )
269
+ else:
270
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
271
+ else:
272
+ if latents.shape != latents_shape:
273
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
274
+ latents = latents.to(self.device)
275
+
276
+ # set timesteps
277
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
278
+ extra_set_kwargs = {}
279
+ if accepts_offset:
280
+ extra_set_kwargs["offset"] = 1
281
+
282
+ self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
283
+
284
+ # Some schedulers like PNDM have timesteps as arrays
285
+ # It's more optimized to move all timesteps to correct device beforehand
286
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
287
+
288
+ # scale the initial noise by the standard deviation required by the scheduler
289
+ latents = latents * self.scheduler.init_noise_sigma
290
+
291
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
292
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
293
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
294
+ # and should be between [0, 1]
295
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
296
+ extra_step_kwargs = {}
297
+ if accepts_eta:
298
+ extra_step_kwargs["eta"] = eta
299
+
300
+ # check if the scheduler accepts generator
301
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
302
+ if accepts_generator:
303
+ extra_step_kwargs["generator"] = generator
304
+
305
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
306
+ # expand the latents if we are doing classifier free guidance
307
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
308
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
309
+
310
+ # predict the noise residual
311
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
312
+
313
+ # perform classifier free guidance
314
+ if do_classifier_free_guidance:
315
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
316
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
317
+
318
+ # perform clip guidance
319
+ if clip_guidance_scale > 0:
320
+ text_embeddings_for_guidance = (
321
+ text_embeddings.chunk(2)[1] if do_classifier_free_guidance else text_embeddings
322
+ )
323
+ noise_pred, latents = self.cond_fn(
324
+ latents,
325
+ t,
326
+ i,
327
+ text_embeddings_for_guidance,
328
+ noise_pred,
329
+ text_embeddings_clip,
330
+ clip_guidance_scale,
331
+ num_cutouts,
332
+ use_cutouts,
333
+ )
334
+
335
+ # compute the previous noisy sample x_t -> x_t-1
336
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
337
+
338
+ # scale and decode the image latents with vae
339
+ latents = 1 / 0.18215 * latents
340
+ image = self.vae.decode(latents).sample
341
+
342
+ image = (image / 2 + 0.5).clamp(0, 1)
343
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
344
+
345
+ if output_type == "pil":
346
+ image = self.numpy_to_pil(image)
347
+
348
+ if not return_dict:
349
+ return (image, None)
350
+
351
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
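+
+ # Illustrative usage sketch (the model and CLIP checkpoint ids are assumptions, not pinned here):
+ #
+ #   import torch
+ #   from diffusers import DiffusionPipeline
+ #   from transformers import CLIPFeatureExtractor, CLIPModel
+ #
+ #   clip_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
+ #   guided_pipeline = DiffusionPipeline.from_pretrained(
+ #       "runwayml/stable-diffusion-v1-5",
+ #       custom_pipeline="clip_guided_stable_diffusion",
+ #       clip_model=CLIPModel.from_pretrained(clip_id),
+ #       feature_extractor=CLIPFeatureExtractor.from_pretrained(clip_id),
+ #       torch_dtype=torch.float16,
+ #   ).to("cuda")
+ #   image = guided_pipeline("fantasy book cover", clip_guidance_scale=100, num_cutouts=4).images[0]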
v0.11.1/composable_stable_diffusion.py ADDED
@@ -0,0 +1,329 @@
1
+ """
2
+ modified based on diffusion library from Huggingface: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
3
+ """
4
+ import inspect
5
+ import warnings
6
+ from typing import List, Optional, Union
7
+
8
+ import torch
9
+
10
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
11
+ from diffusers.pipeline_utils import DiffusionPipeline
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
13
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
14
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
15
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
16
+
17
+
18
+ class ComposableStableDiffusionPipeline(DiffusionPipeline):
19
+ r"""
20
+ Pipeline for text-to-image generation using Stable Diffusion.
21
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
22
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
23
+ Args:
24
+ vae ([`AutoencoderKL`]):
25
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
26
+ text_encoder ([`CLIPTextModel`]):
27
+ Frozen text-encoder. Stable Diffusion uses the text portion of
28
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
29
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
30
+ tokenizer (`CLIPTokenizer`):
31
+ Tokenizer of class
32
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
33
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
34
+ scheduler ([`SchedulerMixin`]):
35
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
36
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
37
+ safety_checker ([`StableDiffusionSafetyChecker`]):
38
+ Classification module that estimates whether generated images could be considered offensive or harmful.
39
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
40
+ feature_extractor ([`CLIPFeatureExtractor`]):
41
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
42
+ """
43
+
44
+ def __init__(
45
+ self,
46
+ vae: AutoencoderKL,
47
+ text_encoder: CLIPTextModel,
48
+ tokenizer: CLIPTokenizer,
49
+ unet: UNet2DConditionModel,
50
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
51
+ safety_checker: StableDiffusionSafetyChecker,
52
+ feature_extractor: CLIPFeatureExtractor,
53
+ ):
54
+ super().__init__()
55
+ self.register_modules(
56
+ vae=vae,
57
+ text_encoder=text_encoder,
58
+ tokenizer=tokenizer,
59
+ unet=unet,
60
+ scheduler=scheduler,
61
+ safety_checker=safety_checker,
62
+ feature_extractor=feature_extractor,
63
+ )
64
+
65
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
66
+ r"""
67
+ Enable sliced attention computation.
68
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
69
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
70
+ Args:
71
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
72
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
73
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
74
+ `attention_head_dim` must be a multiple of `slice_size`.
75
+ """
76
+ if slice_size == "auto":
77
+ # half the attention head size is usually a good trade-off between
78
+ # speed and memory
79
+ slice_size = self.unet.config.attention_head_dim // 2
80
+ self.unet.set_attention_slice(slice_size)
81
+
82
+ def disable_attention_slicing(self):
83
+ r"""
84
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
85
+ back to computing attention in one step.
86
+ """
87
+ # set slice_size = `None` to disable `attention slicing`
88
+ self.enable_attention_slicing(None)
89
+
90
+ @torch.no_grad()
91
+ def __call__(
92
+ self,
93
+ prompt: Union[str, List[str]],
94
+ height: Optional[int] = 512,
95
+ width: Optional[int] = 512,
96
+ num_inference_steps: Optional[int] = 50,
97
+ guidance_scale: Optional[float] = 7.5,
98
+ eta: Optional[float] = 0.0,
99
+ generator: Optional[torch.Generator] = None,
100
+ latents: Optional[torch.FloatTensor] = None,
101
+ output_type: Optional[str] = "pil",
102
+ return_dict: bool = True,
103
+ weights: Optional[str] = "",
104
+ **kwargs,
105
+ ):
106
+ r"""
107
+ Function invoked when calling the pipeline for generation.
108
+ Args:
109
+ prompt (`str` or `List[str]`):
110
+ The prompt or prompts to guide the image generation.
111
+ height (`int`, *optional*, defaults to 512):
112
+ The height in pixels of the generated image.
113
+ width (`int`, *optional*, defaults to 512):
114
+ The width in pixels of the generated image.
115
+ num_inference_steps (`int`, *optional*, defaults to 50):
116
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
117
+ expense of slower inference.
118
+ guidance_scale (`float`, *optional*, defaults to 7.5):
119
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
120
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
121
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
122
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
123
+ usually at the expense of lower image quality.
124
+ eta (`float`, *optional*, defaults to 0.0):
125
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
126
+ [`schedulers.DDIMScheduler`], will be ignored for others.
127
+ generator (`torch.Generator`, *optional*):
128
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
129
+ deterministic.
130
+ latents (`torch.FloatTensor`, *optional*):
131
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
132
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
133
+ tensor will be generated by sampling using the supplied random `generator`.
134
+ output_type (`str`, *optional*, defaults to `"pil"`):
135
+ The output format of the generated image. Choose between
136
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
137
+ return_dict (`bool`, *optional*, defaults to `True`):
138
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
139
+ plain tuple.
140
+ Returns:
141
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
142
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
143
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
144
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
145
+ (nsfw) content, according to the `safety_checker`.
146
+ """
147
+
148
+ if "torch_device" in kwargs:
149
+ device = kwargs.pop("torch_device")
150
+ warnings.warn(
151
+ "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
152
+ " Consider using `pipe.to(torch_device)` instead."
153
+ )
154
+
155
+ # Set device as before (to be removed in 0.3.0)
156
+ if device is None:
157
+ device = "cuda" if torch.cuda.is_available() else "cpu"
158
+ self.to(device)
159
+
160
+ if isinstance(prompt, str):
161
+ batch_size = 1
162
+ elif isinstance(prompt, list):
163
+ batch_size = len(prompt)
164
+ else:
165
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
166
+
167
+ if height % 8 != 0 or width % 8 != 0:
168
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
169
+
170
+ if "|" in prompt:
171
+ prompt = [x.strip() for x in prompt.split("|")]
172
+ print(f"composing {prompt}...")
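+ # Illustrative input (a sketch): prompt = "a red house | a snowy forest" is split into
+ # ["a red house", "a snowy forest"]. `weights` may be a matching "|"-separated string such
+ # as "1 | 1" (or left empty for equal weights); non-positive weights move that concept to
+ # the negative/unconditional side of the guidance computed below.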
173
+
174
+ # get prompt text embeddings
175
+ text_input = self.tokenizer(
176
+ prompt,
177
+ padding="max_length",
178
+ max_length=self.tokenizer.model_max_length,
179
+ truncation=True,
180
+ return_tensors="pt",
181
+ )
182
+ text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
183
+
184
+ if not weights:
185
+ # specify weights for prompts (excluding the unconditional score)
186
+ print("using equal weights for all prompts...")
187
+ pos_weights = torch.tensor(
188
+ [1 / (text_embeddings.shape[0] - 1)] * (text_embeddings.shape[0] - 1), device=self.device
189
+ ).reshape(-1, 1, 1, 1)
190
+ neg_weights = torch.tensor([1.0], device=self.device).reshape(-1, 1, 1, 1)
191
+ mask = torch.tensor([False] + [True] * pos_weights.shape[0], dtype=torch.bool)
192
+ else:
193
+ # set prompt weight for each
194
+ num_prompts = len(prompt) if isinstance(prompt, list) else 1
195
+ weights = [float(w.strip()) for w in weights.split("|")]
196
+ if len(weights) < num_prompts:
197
+ weights.append(1.0)
198
+ weights = torch.tensor(weights, device=self.device)
199
+ assert len(weights) == text_embeddings.shape[0], "weights specified are not equal to the number of prompts"
200
+ pos_weights = []
201
+ neg_weights = []
202
+ mask = [] # first one is unconditional score
203
+ for w in weights:
204
+ if w > 0:
205
+ pos_weights.append(w)
206
+ mask.append(True)
207
+ else:
208
+ neg_weights.append(abs(w))
209
+ mask.append(False)
210
+ # normalize the weights
211
+ pos_weights = torch.tensor(pos_weights, device=self.device).reshape(-1, 1, 1, 1)
212
+ pos_weights = pos_weights / pos_weights.sum()
213
+ neg_weights = torch.tensor(neg_weights, device=self.device).reshape(-1, 1, 1, 1)
214
+ neg_weights = neg_weights / neg_weights.sum()
215
+ mask = torch.tensor(mask, device=self.device, dtype=torch.bool)
216
+
217
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
218
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
219
+ # corresponds to doing no classifier free guidance.
220
+ do_classifier_free_guidance = guidance_scale > 1.0
221
+ # get unconditional embeddings for classifier free guidance
222
+ if do_classifier_free_guidance:
223
+ max_length = text_input.input_ids.shape[-1]
224
+
225
+ if torch.all(mask):
226
+ # no negative prompts, so we use empty string as the negative prompt
227
+ uncond_input = self.tokenizer(
228
+ [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
229
+ )
230
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
231
+
232
+ # For classifier free guidance, we need to do two forward passes.
233
+ # Here we concatenate the unconditional and text embeddings into a single batch
234
+ # to avoid doing two forward passes
235
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
236
+
237
+ # update negative weights
238
+ neg_weights = torch.tensor([1.0], device=self.device)
239
+ mask = torch.tensor([False] + mask.detach().tolist(), device=self.device, dtype=torch.bool)
240
+
241
+ # get the initial random noise unless the user supplied it
242
+
243
+ # Unlike in other pipelines, latents need to be generated in the target device
244
+ # for 1-to-1 results reproducibility with the CompVis implementation.
245
+ # However this currently doesn't work in `mps`.
246
+ latents_device = "cpu" if self.device.type == "mps" else self.device
247
+ latents_shape = (batch_size, self.unet.in_channels, height // 8, width // 8)
248
+ if latents is None:
249
+ latents = torch.randn(
250
+ latents_shape,
251
+ generator=generator,
252
+ device=latents_device,
253
+ )
254
+ else:
255
+ if latents.shape != latents_shape:
256
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
257
+ latents = latents.to(self.device)
258
+
259
+ # set timesteps
260
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
261
+ extra_set_kwargs = {}
262
+ if accepts_offset:
263
+ extra_set_kwargs["offset"] = 1
264
+
265
+ self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
266
+
267
+ # if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
268
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
269
+ latents = latents * self.scheduler.sigmas[0]
270
+
271
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
272
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
273
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
274
+ # and should be between [0, 1]
275
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
276
+ extra_step_kwargs = {}
277
+ if accepts_eta:
278
+ extra_step_kwargs["eta"] = eta
279
+
280
+ for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)):
281
+ # expand the latents if we are doing classifier free guidance
282
+ latent_model_input = (
283
+ torch.cat([latents] * text_embeddings.shape[0]) if do_classifier_free_guidance else latents
284
+ )
285
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
286
+ sigma = self.scheduler.sigmas[i]
287
+ # the model input needs to be scaled to match the continuous ODE formulation in K-LMS
288
+ latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)
289
+
290
+ # reduce memory by predicting each score sequentially
291
+ noise_preds = []
292
+ # predict the noise residual
293
+ for latent_in, text_embedding_in in zip(
294
+ torch.chunk(latent_model_input, chunks=latent_model_input.shape[0], dim=0),
295
+ torch.chunk(text_embeddings, chunks=text_embeddings.shape[0], dim=0),
296
+ ):
297
+ noise_preds.append(self.unet(latent_in, t, encoder_hidden_states=text_embedding_in).sample)
298
+ noise_preds = torch.cat(noise_preds, dim=0)
299
+
300
+ # perform guidance
301
+ if do_classifier_free_guidance:
302
+ noise_pred_uncond = (noise_preds[~mask] * neg_weights).sum(dim=0, keepdims=True)
303
+ noise_pred_text = (noise_preds[mask] * pos_weights).sum(dim=0, keepdims=True)
304
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
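+ # i.e. the composed score is eps_neg + guidance_scale * (sum_i w_i * eps_i - eps_neg),
+ # where eps_neg is the weighted sum of the negative/unconditional scores and the w_i are
+ # the normalized positive prompt weights computed above.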
305
+
306
+ # compute the previous noisy sample x_t -> x_t-1
307
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
308
+ latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs).prev_sample
309
+ else:
310
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
311
+
312
+ # scale and decode the image latents with vae
313
+ latents = 1 / 0.18215 * latents
314
+ image = self.vae.decode(latents).sample
315
+
316
+ image = (image / 2 + 0.5).clamp(0, 1)
317
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
318
+
319
+ # run safety checker
320
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
321
+ image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_checker_input.pixel_values)
322
+
323
+ if output_type == "pil":
324
+ image = self.numpy_to_pil(image)
325
+
326
+ if not return_dict:
327
+ return (image, has_nsfw_concept)
328
+
329
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
v0.11.1/imagic_stable_diffusion.py ADDED
@@ -0,0 +1,501 @@
1
+ """
2
+ modeled after the textual_inversion.py / train_dreambooth.py and the work
3
+ of justinpinkney here: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
4
+ """
5
+ import inspect
6
+ import warnings
7
+ from typing import List, Optional, Union
8
+
9
+ import numpy as np
10
+ import torch
11
+ import torch.nn.functional as F
12
+
13
+ import PIL
14
+ from accelerate import Accelerator
15
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
16
+ from diffusers.pipeline_utils import DiffusionPipeline
17
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
18
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
19
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
20
+ from diffusers.utils import deprecate, logging
21
+
22
+ # TODO: remove and import from diffusers.utils when the new version of diffusers is released
23
+ from packaging import version
24
+ from tqdm.auto import tqdm
25
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
26
+
27
+
28
+ if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
29
+ PIL_INTERPOLATION = {
30
+ "linear": PIL.Image.Resampling.BILINEAR,
31
+ "bilinear": PIL.Image.Resampling.BILINEAR,
32
+ "bicubic": PIL.Image.Resampling.BICUBIC,
33
+ "lanczos": PIL.Image.Resampling.LANCZOS,
34
+ "nearest": PIL.Image.Resampling.NEAREST,
35
+ }
36
+ else:
37
+ PIL_INTERPOLATION = {
38
+ "linear": PIL.Image.LINEAR,
39
+ "bilinear": PIL.Image.BILINEAR,
40
+ "bicubic": PIL.Image.BICUBIC,
41
+ "lanczos": PIL.Image.LANCZOS,
42
+ "nearest": PIL.Image.NEAREST,
43
+ }
44
+ # ------------------------------------------------------------------------------
45
+
46
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
47
+
48
+
49
+ def preprocess(image):
50
+ w, h = image.size
51
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
52
+ image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
53
+ image = np.array(image).astype(np.float32) / 255.0
54
+ image = image[None].transpose(0, 3, 1, 2)
55
+ image = torch.from_numpy(image)
56
+ return 2.0 * image - 1.0
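+ # e.g. a 513x768 PIL image is resized to 512x768 (the nearest multiples of 32) and comes
+ # back as a float tensor of shape (1, 3, 768, 512) with values in [-1, 1].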
57
+
58
+
59
+ class ImagicStableDiffusionPipeline(DiffusionPipeline):
60
+ r"""
61
+ Pipeline for imagic image editing.
62
+ See paper here: https://arxiv.org/pdf/2210.09276.pdf
63
+
64
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
65
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
66
+ Args:
67
+ vae ([`AutoencoderKL`]):
68
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
69
+ text_encoder ([`CLIPTextModel`]):
70
+ Frozen text-encoder. Stable Diffusion uses the text portion of
71
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
72
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
73
+ tokenizer (`CLIPTokenizer`):
74
+ Tokenizer of class
75
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
76
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
77
+ scheduler ([`SchedulerMixin`]):
78
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
79
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
80
+ safety_checker ([`StableDiffusionSafetyChecker`]):
81
+ Classification module that estimates whether generated images could be considered offensive or harmful.
82
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
83
+ feature_extractor ([`CLIPFeatureExtractor`]):
84
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
85
+ """
86
+
87
+ def __init__(
88
+ self,
89
+ vae: AutoencoderKL,
90
+ text_encoder: CLIPTextModel,
91
+ tokenizer: CLIPTokenizer,
92
+ unet: UNet2DConditionModel,
93
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
94
+ safety_checker: StableDiffusionSafetyChecker,
95
+ feature_extractor: CLIPFeatureExtractor,
96
+ ):
97
+ super().__init__()
98
+ self.register_modules(
99
+ vae=vae,
100
+ text_encoder=text_encoder,
101
+ tokenizer=tokenizer,
102
+ unet=unet,
103
+ scheduler=scheduler,
104
+ safety_checker=safety_checker,
105
+ feature_extractor=feature_extractor,
106
+ )
107
+
108
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
109
+ r"""
110
+ Enable sliced attention computation.
111
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
112
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
113
+ Args:
114
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
115
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
116
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
117
+ `attention_head_dim` must be a multiple of `slice_size`.
118
+ """
119
+ if slice_size == "auto":
120
+ # half the attention head size is usually a good trade-off between
121
+ # speed and memory
122
+ slice_size = self.unet.config.attention_head_dim // 2
123
+ self.unet.set_attention_slice(slice_size)
124
+
125
+ def disable_attention_slicing(self):
126
+ r"""
127
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
128
+ back to computing attention in one step.
129
+ """
130
+ # set slice_size = `None` to disable `attention slicing`
131
+ self.enable_attention_slicing(None)
132
+
133
+ def train(
134
+ self,
135
+ prompt: Union[str, List[str]],
136
+ image: Union[torch.FloatTensor, PIL.Image.Image],
137
+ height: Optional[int] = 512,
138
+ width: Optional[int] = 512,
139
+ generator: Optional[torch.Generator] = None,
140
+ embedding_learning_rate: float = 0.001,
141
+ diffusion_model_learning_rate: float = 2e-6,
142
+ text_embedding_optimization_steps: int = 500,
143
+ model_fine_tuning_optimization_steps: int = 1000,
144
+ **kwargs,
145
+ ):
146
+ r"""
147
+ Function invoked when calling the pipeline for generation.
148
+ Args:
149
+ prompt (`str` or `List[str]`):
150
+ The prompt or prompts to guide the image generation.
151
+ height (`int`, *optional*, defaults to 512):
152
+ The height in pixels of the generated image.
153
+ width (`int`, *optional*, defaults to 512):
154
+ The width in pixels of the generated image.
155
+ num_inference_steps (`int`, *optional*, defaults to 50):
156
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
157
+ expense of slower inference.
158
+ guidance_scale (`float`, *optional*, defaults to 7.5):
159
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
160
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
161
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
162
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
163
+ usually at the expense of lower image quality.
164
+ eta (`float`, *optional*, defaults to 0.0):
165
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
166
+ [`schedulers.DDIMScheduler`], will be ignored for others.
167
+ generator (`torch.Generator`, *optional*):
168
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
169
+ deterministic.
170
+ latents (`torch.FloatTensor`, *optional*):
171
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
172
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
173
+ tensor will be generated by sampling using the supplied random `generator`.
174
+ output_type (`str`, *optional*, defaults to `"pil"`):
175
+ The output format of the generated image. Choose between
176
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
177
+ return_dict (`bool`, *optional*, defaults to `True`):
178
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
179
+ plain tuple.
180
+ Returns:
181
+ `None`. The optimized and original text embeddings are stored on the pipeline as
182
+ `self.text_embeddings` and `self.text_embeddings_orig`, and the UNet is fine-tuned in
183
+ place; `__call__` then uses them to generate the edited image.
186
+ """
187
+ message = "Please use `image` instead of `init_image`."
188
+ init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs)
189
+ image = init_image or image
190
+
191
+ accelerator = Accelerator(
192
+ gradient_accumulation_steps=1,
193
+ mixed_precision="fp16",
194
+ )
195
+
196
+ if "torch_device" in kwargs:
197
+ device = kwargs.pop("torch_device")
198
+ warnings.warn(
199
+ "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
200
+ " Consider using `pipe.to(torch_device)` instead."
201
+ )
202
+
203
+ if device is None:
204
+ device = "cuda" if torch.cuda.is_available() else "cpu"
205
+ self.to(device)
206
+
207
+ if height % 8 != 0 or width % 8 != 0:
208
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
209
+
210
+ # Freeze vae and unet
211
+ self.vae.requires_grad_(False)
212
+ self.unet.requires_grad_(False)
213
+ self.text_encoder.requires_grad_(False)
214
+ self.unet.eval()
215
+ self.vae.eval()
216
+ self.text_encoder.eval()
217
+
218
+ if accelerator.is_main_process:
219
+ accelerator.init_trackers(
220
+ "imagic",
221
+ config={
222
+ "embedding_learning_rate": embedding_learning_rate,
223
+ "text_embedding_optimization_steps": text_embedding_optimization_steps,
224
+ },
225
+ )
226
+
227
+ # get text embeddings for prompt
228
+ text_input = self.tokenizer(
229
+ prompt,
230
+ padding="max_length",
231
+ max_length=self.tokenizer.model_max_length,
232
+ truncation=True,
233
+ return_tensors="pt",
234
+ )
235
+ text_embeddings = torch.nn.Parameter(
236
+ self.text_encoder(text_input.input_ids.to(self.device))[0], requires_grad=True
237
+ )
238
+ text_embeddings = text_embeddings.detach()
239
+ text_embeddings.requires_grad_()
240
+ text_embeddings_orig = text_embeddings.clone()
241
+
242
+ # Initialize the optimizer
243
+ optimizer = torch.optim.Adam(
244
+ [text_embeddings], # only optimize the embeddings
245
+ lr=embedding_learning_rate,
246
+ )
247
+
248
+ if isinstance(image, PIL.Image.Image):
249
+ image = preprocess(image)
250
+
251
+ latents_dtype = text_embeddings.dtype
252
+ image = image.to(device=self.device, dtype=latents_dtype)
253
+ init_latent_image_dist = self.vae.encode(image).latent_dist
254
+ image_latents = init_latent_image_dist.sample(generator=generator)
255
+ image_latents = 0.18215 * image_latents
256
+
257
+ progress_bar = tqdm(range(text_embedding_optimization_steps), disable=not accelerator.is_local_main_process)
258
+ progress_bar.set_description("Steps")
259
+
260
+ global_step = 0
261
+
262
+ logger.info("First optimizing the text embedding to better reconstruct the init image")
263
+ for _ in range(text_embedding_optimization_steps):
264
+ with accelerator.accumulate(text_embeddings):
265
+ # Sample noise that we'll add to the latents
266
+ noise = torch.randn(image_latents.shape).to(image_latents.device)
267
+ timesteps = torch.randint(1000, (1,), device=image_latents.device)
268
+
269
+ # Add noise to the latents according to the noise magnitude at each timestep
270
+ # (this is the forward diffusion process)
271
+ noisy_latents = self.scheduler.add_noise(image_latents, noise, timesteps)
272
+
273
+ # Predict the noise residual
274
+ noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
275
+
276
+ loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
277
+ accelerator.backward(loss)
278
+
279
+ optimizer.step()
280
+ optimizer.zero_grad()
281
+
282
+ # Checks if the accelerator has performed an optimization step behind the scenes
283
+ if accelerator.sync_gradients:
284
+ progress_bar.update(1)
285
+ global_step += 1
286
+
287
+ logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
288
+ progress_bar.set_postfix(**logs)
289
+ accelerator.log(logs, step=global_step)
290
+
291
+ accelerator.wait_for_everyone()
292
+
293
+ text_embeddings.requires_grad_(False)
294
+
295
+ # Now we fine tune the unet to better reconstruct the image
296
+ self.unet.requires_grad_(True)
297
+ self.unet.train()
298
+ optimizer = torch.optim.Adam(
299
+ self.unet.parameters(), # only optimize unet
300
+ lr=diffusion_model_learning_rate,
301
+ )
302
+ progress_bar = tqdm(range(model_fine_tuning_optimization_steps), disable=not accelerator.is_local_main_process)
303
+
304
+ logger.info("Next fine tuning the entire model to better reconstruct the init image")
305
+ for _ in range(model_fine_tuning_optimization_steps):
306
+ with accelerator.accumulate(self.unet.parameters()):
307
+ # Sample noise that we'll add to the latents
308
+ noise = torch.randn(image_latents.shape).to(image_latents.device)
309
+ timesteps = torch.randint(1000, (1,), device=image_latents.device)
310
+
311
+ # Add noise to the latents according to the noise magnitude at each timestep
312
+ # (this is the forward diffusion process)
313
+ noisy_latents = self.scheduler.add_noise(image_latents, noise, timesteps)
314
+
315
+ # Predict the noise residual
316
+ noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
317
+
318
+ loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
319
+ accelerator.backward(loss)
320
+
321
+ optimizer.step()
322
+ optimizer.zero_grad()
323
+
324
+ # Checks if the accelerator has performed an optimization step behind the scenes
325
+ if accelerator.sync_gradients:
326
+ progress_bar.update(1)
327
+ global_step += 1
328
+
329
+ logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
330
+ progress_bar.set_postfix(**logs)
331
+ accelerator.log(logs, step=global_step)
332
+
333
+ accelerator.wait_for_everyone()
334
+ self.text_embeddings_orig = text_embeddings_orig
335
+ self.text_embeddings = text_embeddings
336
+
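+ # Illustrative end-to-end sketch (the model id and `init_image` are assumptions, not pinned here):
+ #
+ #   pipe = DiffusionPipeline.from_pretrained(
+ #       "CompVis/stable-diffusion-v1-4", custom_pipeline="imagic_stable_diffusion"
+ #   ).to("cuda")
+ #   pipe.train("A photo of a bird spreading wings.", image=init_image,
+ #              generator=torch.Generator("cuda").manual_seed(0))
+ #   edited = pipe(alpha=1.2, guidance_scale=7.5, num_inference_steps=50).images[0]
+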
337
+ @torch.no_grad()
338
+ def __call__(
339
+ self,
340
+ alpha: float = 1.2,
341
+ height: Optional[int] = 512,
342
+ width: Optional[int] = 512,
343
+ num_inference_steps: Optional[int] = 50,
344
+ generator: Optional[torch.Generator] = None,
345
+ output_type: Optional[str] = "pil",
346
+ return_dict: bool = True,
347
+ guidance_scale: float = 7.5,
348
+ eta: float = 0.0,
349
+ **kwargs,
350
+ ):
351
+ r"""
352
+ Function invoked when calling the pipeline for generation.
353
+ Args:
354
+ prompt (`str` or `List[str]`):
355
+ The prompt or prompts to guide the image generation.
356
+ height (`int`, *optional*, defaults to 512):
357
+ The height in pixels of the generated image.
358
+ width (`int`, *optional*, defaults to 512):
359
+ The width in pixels of the generated image.
360
+ num_inference_steps (`int`, *optional*, defaults to 50):
361
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
362
+ expense of slower inference.
363
+ guidance_scale (`float`, *optional*, defaults to 7.5):
364
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
365
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
366
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
367
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
368
+ usually at the expense of lower image quality.
369
+ eta (`float`, *optional*, defaults to 0.0):
370
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
371
+ [`schedulers.DDIMScheduler`], will be ignored for others.
372
+ generator (`torch.Generator`, *optional*):
373
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
374
+ deterministic.
375
+ latents (`torch.FloatTensor`, *optional*):
376
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
377
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
378
+ tensor will ge generated by sampling using the supplied random `generator`.
379
+ output_type (`str`, *optional*, defaults to `"pil"`):
380
+ The output format of the generated image. Choose between
381
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
382
+ return_dict (`bool`, *optional*, defaults to `True`):
383
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
384
+ plain tuple.
385
+ Returns:
386
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
387
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
388
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
389
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
390
+ (nsfw) content, according to the `safety_checker`.
391
+ """
392
+ if height % 8 != 0 or width % 8 != 0:
393
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
394
+ if self.text_embeddings is None:
395
+ raise ValueError("Please run the pipe.train() before trying to generate an image.")
396
+ if self.text_embeddings_orig is None:
397
+ raise ValueError("Please run the pipe.train() before trying to generate an image.")
398
+
399
+ text_embeddings = alpha * self.text_embeddings_orig + (1 - alpha) * self.text_embeddings
400
+
401
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
402
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
403
+ # corresponds to doing no classifier free guidance.
404
+ do_classifier_free_guidance = guidance_scale > 1.0
405
+ # get unconditional embeddings for classifier free guidance
406
+ if do_classifier_free_guidance:
407
+ uncond_tokens = [""]
408
+ max_length = self.tokenizer.model_max_length
409
+ uncond_input = self.tokenizer(
410
+ uncond_tokens,
411
+ padding="max_length",
412
+ max_length=max_length,
413
+ truncation=True,
414
+ return_tensors="pt",
415
+ )
416
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
417
+
418
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
419
+ seq_len = uncond_embeddings.shape[1]
420
+ uncond_embeddings = uncond_embeddings.view(1, seq_len, -1)
421
+
422
+ # For classifier free guidance, we need to do two forward passes.
423
+ # Here we concatenate the unconditional and text embeddings into a single batch
424
+ # to avoid doing two forward passes
425
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
426
+
427
+ # get the initial random noise unless the user supplied it
428
+
429
+ # Unlike in other pipelines, latents need to be generated in the target device
430
+ # for 1-to-1 results reproducibility with the CompVis implementation.
431
+ # However this currently doesn't work in `mps`.
432
+ latents_shape = (1, self.unet.in_channels, height // 8, width // 8)
433
+ latents_dtype = text_embeddings.dtype
434
+ if self.device.type == "mps":
435
+ # randn does not work reproducibly on mps
436
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
437
+ self.device
438
+ )
439
+ else:
440
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
441
+
442
+ # set timesteps
443
+ self.scheduler.set_timesteps(num_inference_steps)
444
+
445
+ # Some schedulers like PNDM have timesteps as arrays
446
+ # It's more optimized to move all timesteps to correct device beforehand
447
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
448
+
449
+ # scale the initial noise by the standard deviation required by the scheduler
450
+ latents = latents * self.scheduler.init_noise_sigma
451
+
452
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
453
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
454
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
455
+ # and should be between [0, 1]
456
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
457
+ extra_step_kwargs = {}
458
+ if accepts_eta:
459
+ extra_step_kwargs["eta"] = eta
460
+
461
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
462
+ # expand the latents if we are doing classifier free guidance
463
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
464
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
465
+
466
+ # predict the noise residual
467
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
468
+
469
+ # perform guidance
470
+ if do_classifier_free_guidance:
471
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
472
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
473
+
474
+ # compute the previous noisy sample x_t -> x_t-1
475
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
476
+
477
+ latents = 1 / 0.18215 * latents
478
+ image = self.vae.decode(latents).sample
479
+
480
+ image = (image / 2 + 0.5).clamp(0, 1)
481
+
482
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
483
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
484
+
485
+ if self.safety_checker is not None:
486
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
487
+ self.device
488
+ )
489
+ image, has_nsfw_concept = self.safety_checker(
490
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
491
+ )
492
+ else:
493
+ has_nsfw_concept = None
494
+
495
+ if output_type == "pil":
496
+ image = self.numpy_to_pil(image)
497
+
498
+ if not return_dict:
499
+ return (image, has_nsfw_concept)
500
+
501
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
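# A minimal sketch (not taken from the file above) of how `alpha` in __call__ blends the
# two embedding sets prepared by pipe.train():
#     alpha * text_embeddings_orig + (1 - alpha) * text_embeddings
# The shapes and values below are stand-ins for CLIP text embeddings (77 tokens, 768 dims).
import torch

text_embeddings_orig = torch.randn(1, 77, 768)  # embeddings of the original prompt (stand-in)
text_embeddings_opt = torch.randn(1, 77, 768)   # embeddings optimized by pipe.train() (stand-in)

alpha = 1.2  # the pipeline default: slight extrapolation past the original embeddings
blended = alpha * text_embeddings_orig + (1 - alpha) * text_embeddings_opt
print(blended.shape)  # torch.Size([1, 77, 768]), later concatenated with the unconditional embeddings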
v0.11.1/img2img_inpainting.py ADDED
@@ -0,0 +1,463 @@
1
+ import inspect
2
+ from typing import Callable, List, Optional, Tuple, Union
3
+
4
+ import numpy as np
5
+ import torch
6
+
7
+ import PIL
8
+ from diffusers.configuration_utils import FrozenDict
9
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
10
+ from diffusers.pipeline_utils import DiffusionPipeline
11
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
12
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
13
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
14
+ from diffusers.utils import deprecate, logging
15
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
16
+
17
+
18
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
19
+
20
+
21
+ def prepare_mask_and_masked_image(image, mask):
22
+ image = np.array(image.convert("RGB"))
23
+ image = image[None].transpose(0, 3, 1, 2)
24
+ image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
25
+
26
+ mask = np.array(mask.convert("L"))
27
+ mask = mask.astype(np.float32) / 255.0
28
+ mask = mask[None, None]
29
+ mask[mask < 0.5] = 0
30
+ mask[mask >= 0.5] = 1
31
+ mask = torch.from_numpy(mask)
32
+
33
+ masked_image = image * (mask < 0.5)
34
+
35
+ return mask, masked_image
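# A toy sketch of what prepare_mask_and_masked_image() returns, assuming the helper above is
# importable; sizes and colors are arbitrary stand-ins. The mask is binarized to {0, 1} and the
# masked image has the to-be-inpainted region (mask >= 0.5) zeroed out.
import PIL.Image

toy_image = PIL.Image.new("RGB", (64, 64), color=(255, 255, 255))  # all-white background
toy_mask = PIL.Image.new("L", (64, 64), color=0)                   # keep everything by default
toy_mask.paste(255, (0, 0, 32, 64))                                # mark the left half for repainting

mask_t, masked_image_t = prepare_mask_and_masked_image(toy_image, toy_mask)
print(mask_t.shape)          # torch.Size([1, 1, 64, 64]), values in {0.0, 1.0}
print(masked_image_t.shape)  # torch.Size([1, 3, 64, 64]); the left half is zeroed out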
36
+
37
+
38
+ def check_size(image, height, width):
39
+ if isinstance(image, PIL.Image.Image):
40
+ w, h = image.size
41
+ elif isinstance(image, torch.Tensor):
42
+ *_, h, w = image.shape
43
+
44
+ if h != height or w != width:
45
+ raise ValueError(f"Image size should be {height}x{width}, but got {h}x{w}")
46
+
47
+
48
+ def overlay_inner_image(image, inner_image, paste_offset: Tuple[int] = (0, 0)):
49
+ inner_image = inner_image.convert("RGBA")
50
+ image = image.convert("RGB")
51
+
52
+ image.paste(inner_image, paste_offset, inner_image)
53
+ image = image.convert("RGB")
54
+
55
+ return image
56
+
57
+
58
+ class ImageToImageInpaintingPipeline(DiffusionPipeline):
59
+ r"""
60
+ Pipeline for text-guided image-to-image inpainting using Stable Diffusion. *This is an experimental feature*.
61
+
62
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
63
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
64
+
65
+ Args:
66
+ vae ([`AutoencoderKL`]):
67
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
68
+ text_encoder ([`CLIPTextModel`]):
69
+ Frozen text-encoder. Stable Diffusion uses the text portion of
70
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
71
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
72
+ tokenizer (`CLIPTokenizer`):
73
+ Tokenizer of class
74
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
75
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
76
+ scheduler ([`SchedulerMixin`]):
77
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
78
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
79
+ safety_checker ([`StableDiffusionSafetyChecker`]):
80
+ Classification module that estimates whether generated images could be considered offensive or harmful.
81
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
82
+ feature_extractor ([`CLIPFeatureExtractor`]):
83
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
84
+ """
85
+
86
+ def __init__(
87
+ self,
88
+ vae: AutoencoderKL,
89
+ text_encoder: CLIPTextModel,
90
+ tokenizer: CLIPTokenizer,
91
+ unet: UNet2DConditionModel,
92
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
93
+ safety_checker: StableDiffusionSafetyChecker,
94
+ feature_extractor: CLIPFeatureExtractor,
95
+ ):
96
+ super().__init__()
97
+
98
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
99
+ deprecation_message = (
100
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
101
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
102
+ "to update the config accordingly as leaving `steps_offset` might lead to incorrect results"
103
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
104
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
105
+ " file"
106
+ )
107
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
108
+ new_config = dict(scheduler.config)
109
+ new_config["steps_offset"] = 1
110
+ scheduler._internal_dict = FrozenDict(new_config)
111
+
112
+ if safety_checker is None:
113
+ logger.warning(
114
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
115
+ " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
116
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
117
+ " strongly recommend keeping the safety filter enabled in all public facing circumstances, disabling"
118
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
119
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
120
+ )
121
+
122
+ self.register_modules(
123
+ vae=vae,
124
+ text_encoder=text_encoder,
125
+ tokenizer=tokenizer,
126
+ unet=unet,
127
+ scheduler=scheduler,
128
+ safety_checker=safety_checker,
129
+ feature_extractor=feature_extractor,
130
+ )
131
+
132
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
133
+ r"""
134
+ Enable sliced attention computation.
135
+
136
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
137
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
138
+
139
+ Args:
140
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
141
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
142
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
143
+ `attention_head_dim` must be a multiple of `slice_size`.
144
+ """
145
+ if slice_size == "auto":
146
+ # half the attention head size is usually a good trade-off between
147
+ # speed and memory
148
+ slice_size = self.unet.config.attention_head_dim // 2
149
+ self.unet.set_attention_slice(slice_size)
150
+
151
+ def disable_attention_slicing(self):
152
+ r"""
153
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
154
+ back to computing attention in one step.
155
+ """
156
+ # set slice_size = `None` to disable `attention slicing`
157
+ self.enable_attention_slicing(None)
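# A short usage sketch for the attention-slicing helpers above; `pipe` is assumed to be an
# already-instantiated pipeline from this file.
pipe.enable_attention_slicing("auto")  # slice_size = attention_head_dim // 2, attention runs in ~2 steps
# ... run the pipeline here with lower peak memory, at a small speed cost ...
pipe.disable_attention_slicing()       # back to computing attention in a single step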
158
+
159
+ @torch.no_grad()
160
+ def __call__(
161
+ self,
162
+ prompt: Union[str, List[str]],
163
+ image: Union[torch.FloatTensor, PIL.Image.Image],
164
+ inner_image: Union[torch.FloatTensor, PIL.Image.Image],
165
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image],
166
+ height: int = 512,
167
+ width: int = 512,
168
+ num_inference_steps: int = 50,
169
+ guidance_scale: float = 7.5,
170
+ negative_prompt: Optional[Union[str, List[str]]] = None,
171
+ num_images_per_prompt: Optional[int] = 1,
172
+ eta: float = 0.0,
173
+ generator: Optional[torch.Generator] = None,
174
+ latents: Optional[torch.FloatTensor] = None,
175
+ output_type: Optional[str] = "pil",
176
+ return_dict: bool = True,
177
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
178
+ callback_steps: Optional[int] = 1,
179
+ **kwargs,
180
+ ):
181
+ r"""
182
+ Function invoked when calling the pipeline for generation.
183
+
184
+ Args:
185
+ prompt (`str` or `List[str]`):
186
+ The prompt or prompts to guide the image generation.
187
+ image (`torch.Tensor` or `PIL.Image.Image`):
188
+ `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
189
+ be masked out with `mask_image` and repainted according to `prompt`.
190
+ inner_image (`torch.Tensor` or `PIL.Image.Image`):
191
+ `Image`, or tensor representing an image batch which will be overlaid onto `image`. Non-transparent
192
+ regions of `inner_image` must fit inside white pixels in `mask_image`. Expects four channels, with
193
+ the last channel representing the alpha channel, which will be used to blend `inner_image` with
194
+ `image`. If the alpha channel is missing, `inner_image` will be forcibly converted to RGBA.
195
+ mask_image (`PIL.Image.Image`):
196
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
197
+ repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
198
+ to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
199
+ instead of 3, so the expected shape would be `(B, H, W, 1)`.
200
+ height (`int`, *optional*, defaults to 512):
201
+ The height in pixels of the generated image.
202
+ width (`int`, *optional*, defaults to 512):
203
+ The width in pixels of the generated image.
204
+ num_inference_steps (`int`, *optional*, defaults to 50):
205
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
206
+ expense of slower inference.
207
+ guidance_scale (`float`, *optional*, defaults to 7.5):
208
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
209
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
210
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
211
+ 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
212
+ usually at the expense of lower image quality.
213
+ negative_prompt (`str` or `List[str]`, *optional*):
214
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
215
+ if `guidance_scale` is less than `1`).
216
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
217
+ The number of images to generate per prompt.
218
+ eta (`float`, *optional*, defaults to 0.0):
219
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
220
+ [`schedulers.DDIMScheduler`], will be ignored for others.
221
+ generator (`torch.Generator`, *optional*):
222
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
223
+ deterministic.
224
+ latents (`torch.FloatTensor`, *optional*):
225
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
226
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
227
+ tensor will be generated by sampling using the supplied random `generator`.
228
+ output_type (`str`, *optional*, defaults to `"pil"`):
229
+ The output format of the generated image. Choose between
230
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
231
+ return_dict (`bool`, *optional*, defaults to `True`):
232
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
233
+ plain tuple.
234
+ callback (`Callable`, *optional*):
235
+ A function that will be called every `callback_steps` steps during inference. The function will be
236
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
237
+ callback_steps (`int`, *optional*, defaults to 1):
238
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
239
+ called at every step.
240
+
241
+ Returns:
242
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
243
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
244
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
245
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
246
+ (nsfw) content, according to the `safety_checker`.
247
+ """
248
+
249
+ if isinstance(prompt, str):
250
+ batch_size = 1
251
+ elif isinstance(prompt, list):
252
+ batch_size = len(prompt)
253
+ else:
254
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
255
+
256
+ if height % 8 != 0 or width % 8 != 0:
257
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
258
+
259
+ if (callback_steps is None) or (
260
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
261
+ ):
262
+ raise ValueError(
263
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
264
+ f" {type(callback_steps)}."
265
+ )
266
+
267
+ # check if input sizes are correct
268
+ check_size(image, height, width)
269
+ check_size(inner_image, height, width)
270
+ check_size(mask_image, height, width)
271
+
272
+ # get prompt text embeddings
273
+ text_inputs = self.tokenizer(
274
+ prompt,
275
+ padding="max_length",
276
+ max_length=self.tokenizer.model_max_length,
277
+ return_tensors="pt",
278
+ )
279
+ text_input_ids = text_inputs.input_ids
280
+
281
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
282
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
283
+ logger.warning(
284
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
285
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
286
+ )
287
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
288
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
289
+
290
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
291
+ bs_embed, seq_len, _ = text_embeddings.shape
292
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
293
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
294
+
295
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
296
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
297
+ # corresponds to doing no classifier free guidance.
298
+ do_classifier_free_guidance = guidance_scale > 1.0
299
+ # get unconditional embeddings for classifier free guidance
300
+ if do_classifier_free_guidance:
301
+ uncond_tokens: List[str]
302
+ if negative_prompt is None:
303
+ uncond_tokens = [""]
304
+ elif type(prompt) is not type(negative_prompt):
305
+ raise TypeError(
306
+ f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
307
+ f" {type(prompt)}."
308
+ )
309
+ elif isinstance(negative_prompt, str):
310
+ uncond_tokens = [negative_prompt]
311
+ elif batch_size != len(negative_prompt):
312
+ raise ValueError(
313
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
314
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
315
+ " the batch size of `prompt`."
316
+ )
317
+ else:
318
+ uncond_tokens = negative_prompt
319
+
320
+ max_length = text_input_ids.shape[-1]
321
+ uncond_input = self.tokenizer(
322
+ uncond_tokens,
323
+ padding="max_length",
324
+ max_length=max_length,
325
+ truncation=True,
326
+ return_tensors="pt",
327
+ )
328
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
329
+
330
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
331
+ seq_len = uncond_embeddings.shape[1]
332
+ uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
333
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
334
+
335
+ # For classifier free guidance, we need to do two forward passes.
336
+ # Here we concatenate the unconditional and text embeddings into a single batch
337
+ # to avoid doing two forward passes
338
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
339
+
340
+ # get the initial random noise unless the user supplied it
341
+ # Unlike in other pipelines, latents need to be generated in the target device
342
+ # for 1-to-1 results reproducibility with the CompVis implementation.
343
+ # However this currently doesn't work in `mps`.
344
+ num_channels_latents = self.vae.config.latent_channels
345
+ latents_shape = (batch_size * num_images_per_prompt, num_channels_latents, height // 8, width // 8)
346
+ latents_dtype = text_embeddings.dtype
347
+ if latents is None:
348
+ if self.device.type == "mps":
349
+ # randn does not work reproducibly on mps
350
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
351
+ self.device
352
+ )
353
+ else:
354
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
355
+ else:
356
+ if latents.shape != latents_shape:
357
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
358
+ latents = latents.to(self.device)
359
+
360
+ # overlay the inner image
361
+ image = overlay_inner_image(image, inner_image)
362
+
363
+ # prepare mask and masked_image
364
+ mask, masked_image = prepare_mask_and_masked_image(image, mask_image)
365
+ mask = mask.to(device=self.device, dtype=text_embeddings.dtype)
366
+ masked_image = masked_image.to(device=self.device, dtype=text_embeddings.dtype)
367
+
368
+ # resize the mask to latents shape as we concatenate the mask to the latents
369
+ mask = torch.nn.functional.interpolate(mask, size=(height // 8, width // 8))
370
+
371
+ # encode the mask image into latents space so we can concatenate it to the latents
372
+ masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)
373
+ masked_image_latents = 0.18215 * masked_image_latents
374
+
375
+ # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method
376
+ mask = mask.repeat(batch_size * num_images_per_prompt, 1, 1, 1)
377
+ masked_image_latents = masked_image_latents.repeat(batch_size * num_images_per_prompt, 1, 1, 1)
378
+
379
+ mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask
380
+ masked_image_latents = (
381
+ torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents
382
+ )
383
+
384
+ num_channels_mask = mask.shape[1]
385
+ num_channels_masked_image = masked_image_latents.shape[1]
386
+
387
+ if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels:
388
+ raise ValueError(
389
+ f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects"
390
+ f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +"
391
+ f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}"
392
+ f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of"
393
+ " `pipeline.unet` or your `mask_image` or `image` input."
394
+ )
395
+
396
+ # set timesteps
397
+ self.scheduler.set_timesteps(num_inference_steps)
398
+
399
+ # Some schedulers like PNDM have timesteps as arrays
400
+ # It's more optimized to move all timesteps to correct device beforehand
401
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
402
+
403
+ # scale the initial noise by the standard deviation required by the scheduler
404
+ latents = latents * self.scheduler.init_noise_sigma
405
+
406
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
407
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
408
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
409
+ # and should be between [0, 1]
410
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
411
+ extra_step_kwargs = {}
412
+ if accepts_eta:
413
+ extra_step_kwargs["eta"] = eta
414
+
415
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
416
+ # expand the latents if we are doing classifier free guidance
417
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
418
+
419
+ # concat latents, mask, masked_image_latents in the channel dimension
420
+ latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
421
+
422
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
423
+
424
+ # predict the noise residual
425
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
426
+
427
+ # perform guidance
428
+ if do_classifier_free_guidance:
429
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
430
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
431
+
432
+ # compute the previous noisy sample x_t -> x_t-1
433
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
434
+
435
+ # call the callback, if provided
436
+ if callback is not None and i % callback_steps == 0:
437
+ callback(i, t, latents)
438
+
439
+ latents = 1 / 0.18215 * latents
440
+ image = self.vae.decode(latents).sample
441
+
442
+ image = (image / 2 + 0.5).clamp(0, 1)
443
+
444
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
445
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
446
+
447
+ if self.safety_checker is not None:
448
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
449
+ self.device
450
+ )
451
+ image, has_nsfw_concept = self.safety_checker(
452
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
453
+ )
454
+ else:
455
+ has_nsfw_concept = None
456
+
457
+ if output_type == "pil":
458
+ image = self.numpy_to_pil(image)
459
+
460
+ if not return_dict:
461
+ return (image, has_nsfw_concept)
462
+
463
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
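# A hedged end-to-end usage sketch for the pipeline above. The checkpoint name, the
# `custom_pipeline` identifier, and the file paths are assumptions for illustration and are
# not taken from this file; any Stable Diffusion inpainting checkpoint with a 9-channel
# (4 latents + 1 mask + 4 masked-image latents) UNet input should work.
import PIL.Image
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed inpainting checkpoint
    custom_pipeline="img2img_inpainting",    # assumed community pipeline name
    torch_dtype=torch.float16,
).to("cuda")

init_image = PIL.Image.open("background.png").convert("RGB").resize((512, 512))   # placeholder path
inner_image = PIL.Image.open("overlay.png").convert("RGBA").resize((512, 512))    # placeholder path
mask_image = PIL.Image.open("mask.png").convert("L").resize((512, 512))           # placeholder path

result = pipe(
    prompt="a fantasy landscape, highly detailed",
    image=init_image,
    inner_image=inner_image,
    mask_image=mask_image,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
result.save("inpainted.png")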
v0.11.1/interpolate_stable_diffusion.py ADDED
@@ -0,0 +1,524 @@
1
+ import inspect
2
+ import time
3
+ from pathlib import Path
4
+ from typing import Callable, List, Optional, Union
5
+
6
+ import numpy as np
7
+ import torch
8
+
9
+ from diffusers.configuration_utils import FrozenDict
10
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
11
+ from diffusers.pipeline_utils import DiffusionPipeline
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
13
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
14
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
15
+ from diffusers.utils import deprecate, logging
16
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
17
+
18
+
19
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
20
+
21
+
22
+ def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
23
+ """helper function to spherically interpolate two arrays v0 v1"""
24
+
25
+ inputs_are_torch = not isinstance(v0, np.ndarray)  # False for plain numpy inputs, so the final conversion back to torch is skipped
26
+ if inputs_are_torch:
27
+ input_device = v0.device
28
+ v0 = v0.cpu().numpy()
29
+ v1 = v1.cpu().numpy()
30
+
31
+ dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
32
+ if np.abs(dot) > DOT_THRESHOLD:
33
+ v2 = (1 - t) * v0 + t * v1
34
+ else:
35
+ theta_0 = np.arccos(dot)
36
+ sin_theta_0 = np.sin(theta_0)
37
+ theta_t = theta_0 * t
38
+ sin_theta_t = np.sin(theta_t)
39
+ s0 = np.sin(theta_0 - theta_t) / sin_theta_0
40
+ s1 = sin_theta_t / sin_theta_0
41
+ v2 = s0 * v0 + s1 * v1
42
+
43
+ if inputs_are_torch:
44
+ v2 = torch.from_numpy(v2).to(input_device)
45
+
46
+ return v2
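# A tiny sketch of slerp(): spherically interpolate halfway between two Gaussian noise
# tensors. Unlike plain linear interpolation, the result keeps a magnitude roughly comparable
# to the endpoints, which matters when it is reused as initial latents. Shapes are stand-ins.
import torch

noise_a = torch.randn(1, 4, 64, 64)
noise_b = torch.randn(1, 4, 64, 64)

halfway = slerp(0.5, noise_a, noise_b)
print(halfway.shape)                   # torch.Size([1, 4, 64, 64])
print(noise_a.norm(), halfway.norm())  # roughly comparable norms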
47
+
48
+
49
+ class StableDiffusionWalkPipeline(DiffusionPipeline):
50
+ r"""
51
+ Pipeline for text-to-image generation using Stable Diffusion.
52
+
53
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
54
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
55
+
56
+ Args:
57
+ vae ([`AutoencoderKL`]):
58
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
59
+ text_encoder ([`CLIPTextModel`]):
60
+ Frozen text-encoder. Stable Diffusion uses the text portion of
61
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
62
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
63
+ tokenizer (`CLIPTokenizer`):
64
+ Tokenizer of class
65
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
66
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
67
+ scheduler ([`SchedulerMixin`]):
68
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
69
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
70
+ safety_checker ([`StableDiffusionSafetyChecker`]):
71
+ Classification module that estimates whether generated images could be considered offensive or harmful.
72
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
73
+ feature_extractor ([`CLIPFeatureExtractor`]):
74
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
75
+ """
76
+
77
+ def __init__(
78
+ self,
79
+ vae: AutoencoderKL,
80
+ text_encoder: CLIPTextModel,
81
+ tokenizer: CLIPTokenizer,
82
+ unet: UNet2DConditionModel,
83
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
84
+ safety_checker: StableDiffusionSafetyChecker,
85
+ feature_extractor: CLIPFeatureExtractor,
86
+ ):
87
+ super().__init__()
88
+
89
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
90
+ deprecation_message = (
91
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
92
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
93
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
94
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
95
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
96
+ " file"
97
+ )
98
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
99
+ new_config = dict(scheduler.config)
100
+ new_config["steps_offset"] = 1
101
+ scheduler._internal_dict = FrozenDict(new_config)
102
+
103
+ if safety_checker is None:
104
+ logger.warning(
105
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
106
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
107
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
108
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
109
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
110
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
111
+ )
112
+
113
+ self.register_modules(
114
+ vae=vae,
115
+ text_encoder=text_encoder,
116
+ tokenizer=tokenizer,
117
+ unet=unet,
118
+ scheduler=scheduler,
119
+ safety_checker=safety_checker,
120
+ feature_extractor=feature_extractor,
121
+ )
122
+
123
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
124
+ r"""
125
+ Enable sliced attention computation.
126
+
127
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
128
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
129
+
130
+ Args:
131
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
132
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
133
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
134
+ `attention_head_dim` must be a multiple of `slice_size`.
135
+ """
136
+ if slice_size == "auto":
137
+ # half the attention head size is usually a good trade-off between
138
+ # speed and memory
139
+ slice_size = self.unet.config.attention_head_dim // 2
140
+ self.unet.set_attention_slice(slice_size)
141
+
142
+ def disable_attention_slicing(self):
143
+ r"""
144
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
145
+ back to computing attention in one step.
146
+ """
147
+ # set slice_size = `None` to disable `attention slicing`
148
+ self.enable_attention_slicing(None)
149
+
150
+ @torch.no_grad()
151
+ def __call__(
152
+ self,
153
+ prompt: Optional[Union[str, List[str]]] = None,
154
+ height: int = 512,
155
+ width: int = 512,
156
+ num_inference_steps: int = 50,
157
+ guidance_scale: float = 7.5,
158
+ negative_prompt: Optional[Union[str, List[str]]] = None,
159
+ num_images_per_prompt: Optional[int] = 1,
160
+ eta: float = 0.0,
161
+ generator: Optional[torch.Generator] = None,
162
+ latents: Optional[torch.FloatTensor] = None,
163
+ output_type: Optional[str] = "pil",
164
+ return_dict: bool = True,
165
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
166
+ callback_steps: Optional[int] = 1,
167
+ text_embeddings: Optional[torch.FloatTensor] = None,
168
+ **kwargs,
169
+ ):
170
+ r"""
171
+ Function invoked when calling the pipeline for generation.
172
+
173
+ Args:
174
+ prompt (`str` or `List[str]`, *optional*, defaults to `None`):
175
+ The prompt or prompts to guide the image generation. If not provided, `text_embeddings` is required.
176
+ height (`int`, *optional*, defaults to 512):
177
+ The height in pixels of the generated image.
178
+ width (`int`, *optional*, defaults to 512):
179
+ The width in pixels of the generated image.
180
+ num_inference_steps (`int`, *optional*, defaults to 50):
181
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
182
+ expense of slower inference.
183
+ guidance_scale (`float`, *optional*, defaults to 7.5):
184
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
185
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
186
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
187
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
188
+ usually at the expense of lower image quality.
189
+ negative_prompt (`str` or `List[str]`, *optional*):
190
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
191
+ if `guidance_scale` is less than `1`).
192
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
193
+ The number of images to generate per prompt.
194
+ eta (`float`, *optional*, defaults to 0.0):
195
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
196
+ [`schedulers.DDIMScheduler`], will be ignored for others.
197
+ generator (`torch.Generator`, *optional*):
198
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
199
+ deterministic.
200
+ latents (`torch.FloatTensor`, *optional*):
201
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
202
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
203
+ tensor will ge generated by sampling using the supplied random `generator`.
204
+ output_type (`str`, *optional*, defaults to `"pil"`):
205
+ The output format of the generate image. Choose between
206
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
207
+ return_dict (`bool`, *optional*, defaults to `True`):
208
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
209
+ plain tuple.
210
+ callback (`Callable`, *optional*):
211
+ A function that will be called every `callback_steps` steps during inference. The function will be
212
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
213
+ callback_steps (`int`, *optional*, defaults to 1):
214
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
215
+ called at every step.
216
+ text_embeddings (`torch.FloatTensor`, *optional*, defaults to `None`):
217
+ Pre-generated text embeddings to be used as inputs for image generation. Can be used in place of
218
+ `prompt` to avoid re-computing the embeddings. If not provided, the embeddings will be generated from
219
+ the supplied `prompt`.
220
+
221
+ Returns:
222
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
223
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
224
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
225
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
226
+ (nsfw) content, according to the `safety_checker`.
227
+ """
228
+
229
+ if height % 8 != 0 or width % 8 != 0:
230
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
231
+
232
+ if (callback_steps is None) or (
233
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
234
+ ):
235
+ raise ValueError(
236
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
237
+ f" {type(callback_steps)}."
238
+ )
239
+
240
+ if text_embeddings is None:
241
+ if isinstance(prompt, str):
242
+ batch_size = 1
243
+ elif isinstance(prompt, list):
244
+ batch_size = len(prompt)
245
+ else:
246
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
247
+
248
+ # get prompt text embeddings
249
+ text_inputs = self.tokenizer(
250
+ prompt,
251
+ padding="max_length",
252
+ max_length=self.tokenizer.model_max_length,
253
+ return_tensors="pt",
254
+ )
255
+ text_input_ids = text_inputs.input_ids
256
+
257
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
258
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
259
+ print(
260
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
261
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
262
+ )
263
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
264
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
265
+ else:
266
+ batch_size = text_embeddings.shape[0]
267
+
268
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
269
+ bs_embed, seq_len, _ = text_embeddings.shape
270
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
271
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
272
+
273
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
274
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
275
+ # corresponds to doing no classifier free guidance.
276
+ do_classifier_free_guidance = guidance_scale > 1.0
277
+ # get unconditional embeddings for classifier free guidance
278
+ if do_classifier_free_guidance:
279
+ uncond_tokens: List[str]
280
+ if negative_prompt is None:
281
+ uncond_tokens = [""] * batch_size
282
+ elif type(prompt) is not type(negative_prompt):
283
+ raise TypeError(
284
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
285
+ f" {type(prompt)}."
286
+ )
287
+ elif isinstance(negative_prompt, str):
288
+ uncond_tokens = [negative_prompt]
289
+ elif batch_size != len(negative_prompt):
290
+ raise ValueError(
291
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
292
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
293
+ " the batch size of `prompt`."
294
+ )
295
+ else:
296
+ uncond_tokens = negative_prompt
297
+
298
+ max_length = self.tokenizer.model_max_length
299
+ uncond_input = self.tokenizer(
300
+ uncond_tokens,
301
+ padding="max_length",
302
+ max_length=max_length,
303
+ truncation=True,
304
+ return_tensors="pt",
305
+ )
306
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
307
+
308
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
309
+ seq_len = uncond_embeddings.shape[1]
310
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
311
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
312
+
313
+ # For classifier free guidance, we need to do two forward passes.
314
+ # Here we concatenate the unconditional and text embeddings into a single batch
315
+ # to avoid doing two forward passes
316
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
317
+
318
+ # get the initial random noise unless the user supplied it
319
+
320
+ # Unlike in other pipelines, latents need to be generated in the target device
321
+ # for 1-to-1 results reproducibility with the CompVis implementation.
322
+ # However this currently doesn't work in `mps`.
323
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
324
+ latents_dtype = text_embeddings.dtype
325
+ if latents is None:
326
+ if self.device.type == "mps":
327
+ # randn does not work reproducibly on mps
328
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
329
+ self.device
330
+ )
331
+ else:
332
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
333
+ else:
334
+ if latents.shape != latents_shape:
335
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
336
+ latents = latents.to(self.device)
337
+
338
+ # set timesteps
339
+ self.scheduler.set_timesteps(num_inference_steps)
340
+
341
+ # Some schedulers like PNDM have timesteps as arrays
342
+ # It's more optimized to move all timesteps to correct device beforehand
343
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
344
+
345
+ # scale the initial noise by the standard deviation required by the scheduler
346
+ latents = latents * self.scheduler.init_noise_sigma
347
+
348
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
349
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
350
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
351
+ # and should be between [0, 1]
352
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
353
+ extra_step_kwargs = {}
354
+ if accepts_eta:
355
+ extra_step_kwargs["eta"] = eta
356
+
357
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
358
+ # expand the latents if we are doing classifier free guidance
359
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
360
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
361
+
362
+ # predict the noise residual
363
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
364
+
365
+ # perform guidance
366
+ if do_classifier_free_guidance:
367
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
368
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
369
+
370
+ # compute the previous noisy sample x_t -> x_t-1
371
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
372
+
373
+ # call the callback, if provided
374
+ if callback is not None and i % callback_steps == 0:
375
+ callback(i, t, latents)
376
+
377
+ latents = 1 / 0.18215 * latents
378
+ image = self.vae.decode(latents).sample
379
+
380
+ image = (image / 2 + 0.5).clamp(0, 1)
381
+
382
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16
383
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
384
+
385
+ if self.safety_checker is not None:
386
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
387
+ self.device
388
+ )
389
+ image, has_nsfw_concept = self.safety_checker(
390
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
391
+ )
392
+ else:
393
+ has_nsfw_concept = None
394
+
395
+ if output_type == "pil":
396
+ image = self.numpy_to_pil(image)
397
+
398
+ if not return_dict:
399
+ return (image, has_nsfw_concept)
400
+
401
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
402
+
403
+ def embed_text(self, text):
404
+ """takes in text and turns it into text embeddings"""
405
+ text_input = self.tokenizer(
406
+ text,
407
+ padding="max_length",
408
+ max_length=self.tokenizer.model_max_length,
409
+ truncation=True,
410
+ return_tensors="pt",
411
+ )
412
+ with torch.no_grad():
413
+ embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
414
+ return embed
415
+
416
+ def get_noise(self, seed, dtype=torch.float32, height=512, width=512):
417
+ """Takes in random seed and returns corresponding noise vector"""
418
+ return torch.randn(
419
+ (1, self.unet.in_channels, height // 8, width // 8),
420
+ generator=torch.Generator(device=self.device).manual_seed(seed),
421
+ device=self.device,
422
+ dtype=dtype,
423
+ )
424
+
425
+ def walk(
426
+ self,
427
+ prompts: List[str],
428
+ seeds: List[int],
429
+ num_interpolation_steps: Optional[int] = 6,
430
+ output_dir: Optional[str] = "./dreams",
431
+ name: Optional[str] = None,
432
+ batch_size: Optional[int] = 1,
433
+ height: Optional[int] = 512,
434
+ width: Optional[int] = 512,
435
+ guidance_scale: Optional[float] = 7.5,
436
+ num_inference_steps: Optional[int] = 50,
437
+ eta: Optional[float] = 0.0,
438
+ ) -> List[str]:
439
+ """
440
+ Walks through a series of prompts and seeds, interpolating between them and saving the results to disk.
441
+
442
+ Args:
443
+ prompts (`List[str]`):
444
+ List of prompts to generate images for.
445
+ seeds (`List[int]`):
446
+ List of seeds corresponding to provided prompts. Must be the same length as prompts.
447
+ num_interpolation_steps (`int`, *optional*, defaults to 6):
448
+ Number of interpolation steps to take between prompts.
449
+ output_dir (`str`, *optional*, defaults to `./dreams`):
450
+ Directory to save the generated images to.
451
+ name (`str`, *optional*, defaults to `None`):
452
+ Subdirectory of `output_dir` to save the generated images to. If `None`, the name will
453
+ be the current time.
454
+ batch_size (`int`, *optional*, defaults to 1):
455
+ Number of images to generate at once.
456
+ height (`int`, *optional*, defaults to 512):
457
+ Height of the generated images.
458
+ width (`int`, *optional*, defaults to 512):
459
+ Width of the generated images.
460
+ guidance_scale (`float`, *optional*, defaults to 7.5):
461
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
462
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
463
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
464
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
465
+ usually at the expense of lower image quality.
466
+ num_inference_steps (`int`, *optional*, defaults to 50):
467
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
468
+ expense of slower inference.
469
+ eta (`float`, *optional*, defaults to 0.0):
470
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
471
+ [`schedulers.DDIMScheduler`], will be ignored for others.
472
+
473
+ Returns:
474
+ `List[str]`: List of paths to the generated images.
475
+ """
476
+ if not len(prompts) == len(seeds):
477
+ raise ValueError(
478
+ f"Number of prompts and seeds must be equal. Got {len(prompts)} prompts and {len(seeds)} seeds."
479
+ )
480
+
481
+ name = name or time.strftime("%Y%m%d-%H%M%S")
482
+ save_path = Path(output_dir) / name
483
+ save_path.mkdir(exist_ok=True, parents=True)
484
+
485
+ frame_idx = 0
486
+ frame_filepaths = []
487
+ for prompt_a, prompt_b, seed_a, seed_b in zip(prompts, prompts[1:], seeds, seeds[1:]):
488
+ # Embed Text
489
+ embed_a = self.embed_text(prompt_a)
490
+ embed_b = self.embed_text(prompt_b)
491
+
492
+ # Get Noise
493
+ noise_dtype = embed_a.dtype
494
+ noise_a = self.get_noise(seed_a, noise_dtype, height, width)
495
+ noise_b = self.get_noise(seed_b, noise_dtype, height, width)
496
+
497
+ noise_batch, embeds_batch = None, None
498
+ T = np.linspace(0.0, 1.0, num_interpolation_steps)
499
+ for i, t in enumerate(T):
500
+ noise = slerp(float(t), noise_a, noise_b)
501
+ embed = torch.lerp(embed_a, embed_b, t)
502
+
503
+ noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise], dim=0)
504
+ embeds_batch = embed if embeds_batch is None else torch.cat([embeds_batch, embed], dim=0)
505
+
506
+ batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == T.shape[0]
507
+ if batch_is_ready:
508
+ outputs = self(
509
+ latents=noise_batch,
510
+ text_embeddings=embeds_batch,
511
+ height=height,
512
+ width=width,
513
+ guidance_scale=guidance_scale,
514
+ eta=eta,
515
+ num_inference_steps=num_inference_steps,
516
+ )
517
+ noise_batch, embeds_batch = None, None
518
+
519
+ for image in outputs["images"]:
520
+ frame_filepath = str(save_path / f"frame_{frame_idx:06d}.png")
521
+ image.save(frame_filepath)
522
+ frame_filepaths.append(frame_filepath)
523
+ frame_idx += 1
524
+ return frame_filepaths
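# A hedged usage sketch for walk(). The checkpoint name and the `custom_pipeline` identifier
# are assumptions for illustration and are not taken from this file; a CUDA GPU is assumed.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",                 # assumed base checkpoint
    custom_pipeline="interpolate_stable_diffusion",  # assumed community pipeline name
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_attention_slicing()

frame_paths = pipe.walk(
    prompts=["a photo of a cat", "a photo of a dog"],
    seeds=[42, 1337],
    num_interpolation_steps=8,
    output_dir="./dreams",
    num_inference_steps=50,
    guidance_scale=7.5,
)
print(frame_paths[0])  # e.g. dreams/<timestamp>/frame_000000.png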
v0.11.1/lpw_stable_diffusion.py ADDED
@@ -0,0 +1,1162 @@
1
+ import inspect
2
+ import re
3
+ from typing import Callable, List, Optional, Union
4
+
5
+ import numpy as np
6
+ import torch
7
+
8
+ import diffusers
9
+ import PIL
10
+ from diffusers import SchedulerMixin, StableDiffusionPipeline
11
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker
13
+ from diffusers.utils import deprecate, logging
14
+ from packaging import version
15
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
16
+
17
+
18
+ try:
19
+ from diffusers.utils import PIL_INTERPOLATION
20
+ except ImportError:
21
+ if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
22
+ PIL_INTERPOLATION = {
23
+ "linear": PIL.Image.Resampling.BILINEAR,
24
+ "bilinear": PIL.Image.Resampling.BILINEAR,
25
+ "bicubic": PIL.Image.Resampling.BICUBIC,
26
+ "lanczos": PIL.Image.Resampling.LANCZOS,
27
+ "nearest": PIL.Image.Resampling.NEAREST,
28
+ }
29
+ else:
30
+ PIL_INTERPOLATION = {
31
+ "linear": PIL.Image.LINEAR,
32
+ "bilinear": PIL.Image.BILINEAR,
33
+ "bicubic": PIL.Image.BICUBIC,
34
+ "lanczos": PIL.Image.LANCZOS,
35
+ "nearest": PIL.Image.NEAREST,
36
+ }
37
+ # ------------------------------------------------------------------------------
38
+
39
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
40
+
41
+ re_attention = re.compile(
42
+ r"""
43
+ \\\(|
44
+ \\\)|
45
+ \\\[|
46
+ \\]|
47
+ \\\\|
48
+ \\|
49
+ \(|
50
+ \[|
51
+ :([+-]?[.\d]+)\)|
52
+ \)|
53
+ ]|
54
+ [^\\()\[\]:]+|
55
+ :
56
+ """,
57
+ re.X,
58
+ )
59
+
60
+
61
+ def parse_prompt_attention(text):
62
+ """
63
+ Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
64
+ Accepted tokens are:
65
+ (abc) - increases attention to abc by a multiplier of 1.1
66
+ (abc:3.12) - increases attention to abc by a multiplier of 3.12
67
+ [abc] - decreases attention to abc by a multiplier of 1.1
68
+ \( - literal character '('
69
+ \[ - literal character '['
70
+ \) - literal character ')'
71
+ \] - literal character ']'
72
+ \\ - literal character '\'
73
+ anything else - just text
74
+ >>> parse_prompt_attention('normal text')
75
+ [['normal text', 1.0]]
76
+ >>> parse_prompt_attention('an (important) word')
77
+ [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
78
+ >>> parse_prompt_attention('(unbalanced')
79
+ [['unbalanced', 1.1]]
80
+ >>> parse_prompt_attention('\(literal\]')
81
+ [['(literal]', 1.0]]
82
+ >>> parse_prompt_attention('(unnecessary)(parens)')
83
+ [['unnecessaryparens', 1.1]]
84
+ >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
85
+ [['a ', 1.0],
86
+ ['house', 1.5730000000000004],
87
+ [' ', 1.1],
88
+ ['on', 1.0],
89
+ [' a ', 1.1],
90
+ ['hill', 0.55],
91
+ [', sun, ', 1.1],
92
+ ['sky', 1.4641000000000006],
93
+ ['.', 1.1]]
94
+ """
95
+
96
+ res = []
97
+ round_brackets = []
98
+ square_brackets = []
99
+
100
+ round_bracket_multiplier = 1.1
101
+ square_bracket_multiplier = 1 / 1.1
102
+
103
+ def multiply_range(start_position, multiplier):
104
+ for p in range(start_position, len(res)):
105
+ res[p][1] *= multiplier
106
+
107
+ for m in re_attention.finditer(text):
108
+ text = m.group(0)
109
+ weight = m.group(1)
110
+
111
+ if text.startswith("\\"):
112
+ res.append([text[1:], 1.0])
113
+ elif text == "(":
114
+ round_brackets.append(len(res))
115
+ elif text == "[":
116
+ square_brackets.append(len(res))
117
+ elif weight is not None and len(round_brackets) > 0:
118
+ multiply_range(round_brackets.pop(), float(weight))
119
+ elif text == ")" and len(round_brackets) > 0:
120
+ multiply_range(round_brackets.pop(), round_bracket_multiplier)
121
+ elif text == "]" and len(square_brackets) > 0:
122
+ multiply_range(square_brackets.pop(), square_bracket_multiplier)
123
+ else:
124
+ res.append([text, 1.0])
125
+
126
+ for pos in round_brackets:
127
+ multiply_range(pos, round_bracket_multiplier)
128
+
129
+ for pos in square_brackets:
130
+ multiply_range(pos, square_bracket_multiplier)
131
+
132
+ if len(res) == 0:
133
+ res = [["", 1.0]]
134
+
135
+ # merge runs of identical weights
136
+ i = 0
137
+ while i + 1 < len(res):
138
+ if res[i][1] == res[i + 1][1]:
139
+ res[i][0] += res[i + 1][0]
140
+ res.pop(i + 1)
141
+ else:
142
+ i += 1
143
+
144
+ return res
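+ # Note: bracket weights compound multiplicatively, e.g. "((word))" scales `word` by 1.1 ** 2 = 1.21,
+ # and an unbalanced "(" applies its 1.1 multiplier to everything that follows it (see the doctests above).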
145
+
146
+
147
+ def get_prompts_with_weights(pipe: StableDiffusionPipeline, prompt: List[str], max_length: int):
148
+ r"""
149
+ Tokenize a list of prompts and return the tokens together with the weight of each token.
150
+
151
+ No padding, starting or ending token is included.
152
+ """
153
+ tokens = []
154
+ weights = []
155
+ truncated = False
156
+ for text in prompt:
157
+ texts_and_weights = parse_prompt_attention(text)
158
+ text_token = []
159
+ text_weight = []
160
+ for word, weight in texts_and_weights:
161
+ # tokenize and discard the starting and the ending token
162
+ token = pipe.tokenizer(word).input_ids[1:-1]
163
+ text_token += token
164
+ # copy the weight by length of token
165
+ text_weight += [weight] * len(token)
166
+ # stop if the text is too long (longer than truncation limit)
167
+ if len(text_token) > max_length:
168
+ truncated = True
169
+ break
170
+ # truncate
171
+ if len(text_token) > max_length:
172
+ truncated = True
173
+ text_token = text_token[:max_length]
174
+ text_weight = text_weight[:max_length]
175
+ tokens.append(text_token)
176
+ weights.append(text_weight)
177
+ if truncated:
178
+ logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
179
+ return tokens, weights
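+ # Illustrative sketch (actual token ids depend on the CLIP tokenizer): for ["a (red:1.2) car"] this
+ # returns one token list per prompt plus a parallel weight list such as [1.0, 1.2, 1.0, ...], where the
+ # weight of each parsed text span is repeated once per sub-word token it produced.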
180
+
181
+
182
+ def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, no_boseos_middle=True, chunk_length=77):
183
+ r"""
184
+ Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
185
+ """
186
+ max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
187
+ weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
188
+ for i in range(len(tokens)):
189
+ tokens[i] = [bos] + tokens[i] + [eos] * (max_length - 1 - len(tokens[i]))
190
+ if no_boseos_middle:
191
+ weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
192
+ else:
193
+ w = []
194
+ if len(weights[i]) == 0:
195
+ w = [1.0] * weights_length
196
+ else:
197
+ for j in range(max_embeddings_multiples):
198
+ w.append(1.0) # weight for starting token in this chunk
199
+ w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
200
+ w.append(1.0) # weight for ending token in this chunk
201
+ w += [1.0] * (weights_length - len(w))
202
+ weights[i] = w[:]
203
+
204
+ return tokens, weights
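+ # Illustrative sketch: with chunk_length=77 and max_length=152 (two chunks), each token list becomes
+ # [bos] + tokens + [eos] * (151 - len(tokens)); weights are padded with 1.0, and when
+ # no_boseos_middle=False every 75-token chunk additionally gets weight-1.0 slots for its own bos/eos.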
205
+
206
+
207
+ def get_unweighted_text_embeddings(
208
+ pipe: StableDiffusionPipeline,
209
+ text_input: torch.Tensor,
210
+ chunk_length: int,
211
+ no_boseos_middle: Optional[bool] = True,
212
+ ):
213
+ """
214
+ When the length of tokens exceeds the capacity of the text encoder,
215
+ the input should be split into chunks and sent to the text encoder chunk by chunk.
216
+ """
217
+ max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
218
+ if max_embeddings_multiples > 1:
219
+ text_embeddings = []
220
+ for i in range(max_embeddings_multiples):
221
+ # extract the i-th chunk
222
+ text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].clone()
223
+
224
+ # cover the head and the tail by the starting and the ending tokens
225
+ text_input_chunk[:, 0] = text_input[0, 0]
226
+ text_input_chunk[:, -1] = text_input[0, -1]
227
+ text_embedding = pipe.text_encoder(text_input_chunk)[0]
228
+
229
+ if no_boseos_middle:
230
+ if i == 0:
231
+ # discard the ending token
232
+ text_embedding = text_embedding[:, :-1]
233
+ elif i == max_embeddings_multiples - 1:
234
+ # discard the starting token
235
+ text_embedding = text_embedding[:, 1:]
236
+ else:
237
+ # discard both starting and ending tokens
238
+ text_embedding = text_embedding[:, 1:-1]
239
+
240
+ text_embeddings.append(text_embedding)
241
+ text_embeddings = torch.concat(text_embeddings, axis=1)
242
+ else:
243
+ text_embeddings = pipe.text_encoder(text_input)[0]
244
+ return text_embeddings
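+ # Illustrative sketch: with chunk_length=77 and a 152-token input (max_embeddings_multiples=2), each
+ # 75-token slice is re-wrapped with the original bos/eos ids, encoded separately, the duplicated
+ # bos/eos positions are trimmed when no_boseos_middle=True, and the chunks are concatenated along dim 1.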
245
+
246
+
247
+ def get_weighted_text_embeddings(
248
+ pipe: StableDiffusionPipeline,
249
+ prompt: Union[str, List[str]],
250
+ uncond_prompt: Optional[Union[str, List[str]]] = None,
251
+ max_embeddings_multiples: Optional[int] = 3,
252
+ no_boseos_middle: Optional[bool] = False,
253
+ skip_parsing: Optional[bool] = False,
254
+ skip_weighting: Optional[bool] = False,
255
+ **kwargs,
256
+ ):
257
+ r"""
258
+ Prompts can be assigned with local weights using brackets. For example,
259
+ prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
260
+ and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
261
+
262
+ Also, to regularize the embedding, the weighted embedding is scaled to preserve the original mean.
263
+
264
+ Args:
265
+ pipe (`StableDiffusionPipeline`):
266
+ Pipe to provide access to the tokenizer and the text encoder.
267
+ prompt (`str` or `List[str]`):
268
+ The prompt or prompts to guide the image generation.
269
+ uncond_prompt (`str` or `List[str]`):
270
+ The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
271
+ is provided, the embeddings of prompt and uncond_prompt are concatenated.
272
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
273
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
274
+ no_boseos_middle (`bool`, *optional*, defaults to `False`):
275
+ If the length of the text tokens is a multiple of the text encoder capacity, whether to keep the starting and
276
+ ending tokens in each of the chunks in the middle.
277
+ skip_parsing (`bool`, *optional*, defaults to `False`):
278
+ Skip the parsing of brackets.
279
+ skip_weighting (`bool`, *optional*, defaults to `False`):
280
+ Skip the weighting. When the parsing is skipped, it is forced True.
281
+ """
282
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
283
+ if isinstance(prompt, str):
284
+ prompt = [prompt]
285
+
286
+ if not skip_parsing:
287
+ prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
288
+ if uncond_prompt is not None:
289
+ if isinstance(uncond_prompt, str):
290
+ uncond_prompt = [uncond_prompt]
291
+ uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
292
+ else:
293
+ prompt_tokens = [
294
+ token[1:-1] for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True).input_ids
295
+ ]
296
+ prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
297
+ if uncond_prompt is not None:
298
+ if isinstance(uncond_prompt, str):
299
+ uncond_prompt = [uncond_prompt]
300
+ uncond_tokens = [
301
+ token[1:-1]
302
+ for token in pipe.tokenizer(uncond_prompt, max_length=max_length, truncation=True).input_ids
303
+ ]
304
+ uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
305
+
306
+ # round up the longest length of tokens to a multiple of (model_max_length - 2)
307
+ max_length = max([len(token) for token in prompt_tokens])
308
+ if uncond_prompt is not None:
309
+ max_length = max(max_length, max([len(token) for token in uncond_tokens]))
310
+
311
+ max_embeddings_multiples = min(
312
+ max_embeddings_multiples,
313
+ (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
314
+ )
315
+ max_embeddings_multiples = max(1, max_embeddings_multiples)
316
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
317
+
318
+ # pad the length of tokens and weights
319
+ bos = pipe.tokenizer.bos_token_id
320
+ eos = pipe.tokenizer.eos_token_id
321
+ prompt_tokens, prompt_weights = pad_tokens_and_weights(
322
+ prompt_tokens,
323
+ prompt_weights,
324
+ max_length,
325
+ bos,
326
+ eos,
327
+ no_boseos_middle=no_boseos_middle,
328
+ chunk_length=pipe.tokenizer.model_max_length,
329
+ )
330
+ prompt_tokens = torch.tensor(prompt_tokens, dtype=torch.long, device=pipe.device)
331
+ if uncond_prompt is not None:
332
+ uncond_tokens, uncond_weights = pad_tokens_and_weights(
333
+ uncond_tokens,
334
+ uncond_weights,
335
+ max_length,
336
+ bos,
337
+ eos,
338
+ no_boseos_middle=no_boseos_middle,
339
+ chunk_length=pipe.tokenizer.model_max_length,
340
+ )
341
+ uncond_tokens = torch.tensor(uncond_tokens, dtype=torch.long, device=pipe.device)
342
+
343
+ # get the embeddings
344
+ text_embeddings = get_unweighted_text_embeddings(
345
+ pipe,
346
+ prompt_tokens,
347
+ pipe.tokenizer.model_max_length,
348
+ no_boseos_middle=no_boseos_middle,
349
+ )
350
+ prompt_weights = torch.tensor(prompt_weights, dtype=text_embeddings.dtype, device=pipe.device)
351
+ if uncond_prompt is not None:
352
+ uncond_embeddings = get_unweighted_text_embeddings(
353
+ pipe,
354
+ uncond_tokens,
355
+ pipe.tokenizer.model_max_length,
356
+ no_boseos_middle=no_boseos_middle,
357
+ )
358
+ uncond_weights = torch.tensor(uncond_weights, dtype=uncond_embeddings.dtype, device=pipe.device)
359
+
360
+ # assign weights to the prompts and normalize in the sense of mean
361
+ # TODO: should we normalize by chunk or in a whole (current implementation)?
362
+ if (not skip_parsing) and (not skip_weighting):
363
+ previous_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype)
364
+ text_embeddings *= prompt_weights.unsqueeze(-1)
365
+ current_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype)
366
+ text_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1)
367
+ if uncond_prompt is not None:
368
+ previous_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype)
369
+ uncond_embeddings *= uncond_weights.unsqueeze(-1)
370
+ current_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype)
371
+ uncond_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1)
372
+
373
+ if uncond_prompt is not None:
374
+ return text_embeddings, uncond_embeddings
375
+ return text_embeddings, None
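+ # Usage sketch (illustrative; `pipe` is any StableDiffusionPipeline-compatible object):
+ #   cond, uncond = get_weighted_text_embeddings(pipe, "a (very beautiful:1.3) landscape", uncond_prompt="")
+ # `cond` has shape (1, seq_len, hidden_dim) where seq_len grows with max_embeddings_multiples, and the
+ # weighted embeddings are rescaled so their per-prompt mean matches the unweighted mean.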
376
+
377
+
378
+ def preprocess_image(image):
379
+ w, h = image.size
380
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
381
+ image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
382
+ image = np.array(image).astype(np.float32) / 255.0
383
+ image = image[None].transpose(0, 3, 1, 2)
384
+ image = torch.from_numpy(image)
385
+ return 2.0 * image - 1.0
386
+
387
+
388
+ def preprocess_mask(mask, scale_factor=8):
389
+ mask = mask.convert("L")
390
+ w, h = mask.size
391
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
392
+ mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL_INTERPOLATION["nearest"])
393
+ mask = np.array(mask).astype(np.float32) / 255.0
394
+ mask = np.tile(mask, (4, 1, 1))
395
+ mask = mask[None].transpose(0, 1, 2, 3)  # add a batch dimension; the identity transpose is a no-op
396
+ mask = 1 - mask # repaint white, keep black
397
+ mask = torch.from_numpy(mask)
398
+ return mask
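+ # Illustrative sketch: a 512x512 PIL mask with scale_factor=8 becomes a (1, 4, 64, 64) tensor whose
+ # values are inverted (1 - mask), so white input pixels mark the region to repaint in latent space.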
399
+
400
+
401
+ class StableDiffusionLongPromptWeightingPipeline(StableDiffusionPipeline):
402
+ r"""
403
+ Pipeline for text-to-image generation using Stable Diffusion without a token length limit, with support for parsing
404
+ weighting in the prompt.
405
+
406
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
407
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
408
+
409
+ Args:
410
+ vae ([`AutoencoderKL`]):
411
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
412
+ text_encoder ([`CLIPTextModel`]):
413
+ Frozen text-encoder. Stable Diffusion uses the text portion of
414
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
415
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
416
+ tokenizer (`CLIPTokenizer`):
417
+ Tokenizer of class
418
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
419
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
420
+ scheduler ([`SchedulerMixin`]):
421
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
422
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
423
+ safety_checker ([`StableDiffusionSafetyChecker`]):
424
+ Classification module that estimates whether generated images could be considered offensive or harmful.
425
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
426
+ feature_extractor ([`CLIPFeatureExtractor`]):
427
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
428
+ """
429
+
430
+ if version.parse(version.parse(diffusers.__version__).base_version) >= version.parse("0.9.0"):
431
+
432
+ def __init__(
433
+ self,
434
+ vae: AutoencoderKL,
435
+ text_encoder: CLIPTextModel,
436
+ tokenizer: CLIPTokenizer,
437
+ unet: UNet2DConditionModel,
438
+ scheduler: SchedulerMixin,
439
+ safety_checker: StableDiffusionSafetyChecker,
440
+ feature_extractor: CLIPFeatureExtractor,
441
+ requires_safety_checker: bool = True,
442
+ ):
443
+ super().__init__(
444
+ vae=vae,
445
+ text_encoder=text_encoder,
446
+ tokenizer=tokenizer,
447
+ unet=unet,
448
+ scheduler=scheduler,
449
+ safety_checker=safety_checker,
450
+ feature_extractor=feature_extractor,
451
+ requires_safety_checker=requires_safety_checker,
452
+ )
453
+ self.__init__additional__()
454
+
455
+ else:
456
+
457
+ def __init__(
458
+ self,
459
+ vae: AutoencoderKL,
460
+ text_encoder: CLIPTextModel,
461
+ tokenizer: CLIPTokenizer,
462
+ unet: UNet2DConditionModel,
463
+ scheduler: SchedulerMixin,
464
+ safety_checker: StableDiffusionSafetyChecker,
465
+ feature_extractor: CLIPFeatureExtractor,
466
+ ):
467
+ super().__init__(
468
+ vae=vae,
469
+ text_encoder=text_encoder,
470
+ tokenizer=tokenizer,
471
+ unet=unet,
472
+ scheduler=scheduler,
473
+ safety_checker=safety_checker,
474
+ feature_extractor=feature_extractor,
475
+ )
476
+ self.__init__additional__()
477
+
478
+ def __init__additional__(self):
479
+ if not hasattr(self, "vae_scale_factor"):
480
+ setattr(self, "vae_scale_factor", 2 ** (len(self.vae.config.block_out_channels) - 1))
481
+
482
+ @property
483
+ def _execution_device(self):
484
+ r"""
485
+ Returns the device on which the pipeline's models will be executed. After calling
486
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
487
+ hooks.
488
+ """
489
+ if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
490
+ return self.device
491
+ for module in self.unet.modules():
492
+ if (
493
+ hasattr(module, "_hf_hook")
494
+ and hasattr(module._hf_hook, "execution_device")
495
+ and module._hf_hook.execution_device is not None
496
+ ):
497
+ return torch.device(module._hf_hook.execution_device)
498
+ return self.device
499
+
500
+ def _encode_prompt(
501
+ self,
502
+ prompt,
503
+ device,
504
+ num_images_per_prompt,
505
+ do_classifier_free_guidance,
506
+ negative_prompt,
507
+ max_embeddings_multiples,
508
+ ):
509
+ r"""
510
+ Encodes the prompt into text encoder hidden states.
511
+
512
+ Args:
513
+ prompt (`str` or `list(int)`):
514
+ prompt to be encoded
515
+ device: (`torch.device`):
516
+ torch device
517
+ num_images_per_prompt (`int`):
518
+ number of images that should be generated per prompt
519
+ do_classifier_free_guidance (`bool`):
520
+ whether to use classifier free guidance or not
521
+ negative_prompt (`str` or `List[str]`):
522
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
523
+ if `guidance_scale` is less than `1`).
524
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
525
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
526
+ """
527
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
528
+
529
+ if negative_prompt is None:
530
+ negative_prompt = [""] * batch_size
531
+ elif isinstance(negative_prompt, str):
532
+ negative_prompt = [negative_prompt] * batch_size
533
+ if batch_size != len(negative_prompt):
534
+ raise ValueError(
535
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
536
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
537
+ " the batch size of `prompt`."
538
+ )
539
+
540
+ text_embeddings, uncond_embeddings = get_weighted_text_embeddings(
541
+ pipe=self,
542
+ prompt=prompt,
543
+ uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
544
+ max_embeddings_multiples=max_embeddings_multiples,
545
+ )
546
+ bs_embed, seq_len, _ = text_embeddings.shape
547
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
548
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
549
+
550
+ if do_classifier_free_guidance:
551
+ bs_embed, seq_len, _ = uncond_embeddings.shape
552
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
553
+ uncond_embeddings = uncond_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
554
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
555
+
556
+ return text_embeddings
557
+
558
+ def check_inputs(self, prompt, height, width, strength, callback_steps):
559
+ if not isinstance(prompt, str) and not isinstance(prompt, list):
560
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
561
+
562
+ if strength < 0 or strength > 1:
563
+ raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
564
+
565
+ if height % 8 != 0 or width % 8 != 0:
566
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
567
+
568
+ if (callback_steps is None) or (
569
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
570
+ ):
571
+ raise ValueError(
572
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
573
+ f" {type(callback_steps)}."
574
+ )
575
+
576
+ def get_timesteps(self, num_inference_steps, strength, device, is_text2img):
577
+ if is_text2img:
578
+ return self.scheduler.timesteps.to(device), num_inference_steps
579
+ else:
580
+ # get the original timestep using init_timestep
581
+ offset = self.scheduler.config.get("steps_offset", 0)
582
+ init_timestep = int(num_inference_steps * strength) + offset
583
+ init_timestep = min(init_timestep, num_inference_steps)
584
+
585
+ t_start = max(num_inference_steps - init_timestep + offset, 0)
586
+ timesteps = self.scheduler.timesteps[t_start:].to(device)
587
+ return timesteps, num_inference_steps - t_start
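+ # Illustrative sketch (img2img, ignoring any scheduler steps_offset): num_inference_steps=50 and
+ # strength=0.8 give init_timestep=40 and t_start=10, so the last 40 of the 50 scheduled timesteps are run.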
588
+
589
+ def run_safety_checker(self, image, device, dtype):
590
+ if self.safety_checker is not None:
591
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
592
+ image, has_nsfw_concept = self.safety_checker(
593
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
594
+ )
595
+ else:
596
+ has_nsfw_concept = None
597
+ return image, has_nsfw_concept
598
+
599
+ def decode_latents(self, latents):
600
+ latents = 1 / 0.18215 * latents
601
+ image = self.vae.decode(latents).sample
602
+ image = (image / 2 + 0.5).clamp(0, 1)
603
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
604
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
605
+ return image
606
+
607
+ def prepare_extra_step_kwargs(self, generator, eta):
608
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
609
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
610
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
611
+ # and should be between [0, 1]
612
+
613
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
614
+ extra_step_kwargs = {}
615
+ if accepts_eta:
616
+ extra_step_kwargs["eta"] = eta
617
+
618
+ # check if the scheduler accepts generator
619
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
620
+ if accepts_generator:
621
+ extra_step_kwargs["generator"] = generator
622
+ return extra_step_kwargs
623
+
624
+ def prepare_latents(self, image, timestep, batch_size, height, width, dtype, device, generator, latents=None):
625
+ if image is None:
626
+ shape = (
627
+ batch_size,
628
+ self.unet.in_channels,
629
+ height // self.vae_scale_factor,
630
+ width // self.vae_scale_factor,
631
+ )
632
+
633
+ if latents is None:
634
+ if device.type == "mps":
635
+ # randn does not work reproducibly on mps
636
+ latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
637
+ else:
638
+ latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)
639
+ else:
640
+ if latents.shape != shape:
641
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
642
+ latents = latents.to(device)
643
+
644
+ # scale the initial noise by the standard deviation required by the scheduler
645
+ latents = latents * self.scheduler.init_noise_sigma
646
+ return latents, None, None
647
+ else:
648
+ init_latent_dist = self.vae.encode(image).latent_dist
649
+ init_latents = init_latent_dist.sample(generator=generator)
650
+ init_latents = 0.18215 * init_latents
651
+ init_latents = torch.cat([init_latents] * batch_size, dim=0)
652
+ init_latents_orig = init_latents
653
+ shape = init_latents.shape
654
+
655
+ # add noise to latents using the timesteps
656
+ if device.type == "mps":
657
+ noise = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
658
+ else:
659
+ noise = torch.randn(shape, generator=generator, device=device, dtype=dtype)
660
+ latents = self.scheduler.add_noise(init_latents, noise, timestep)
661
+ return latents, init_latents_orig, noise
662
+
663
+ @torch.no_grad()
664
+ def __call__(
665
+ self,
666
+ prompt: Union[str, List[str]],
667
+ negative_prompt: Optional[Union[str, List[str]]] = None,
668
+ image: Union[torch.FloatTensor, PIL.Image.Image] = None,
669
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
670
+ height: int = 512,
671
+ width: int = 512,
672
+ num_inference_steps: int = 50,
673
+ guidance_scale: float = 7.5,
674
+ strength: float = 0.8,
675
+ num_images_per_prompt: Optional[int] = 1,
676
+ eta: float = 0.0,
677
+ generator: Optional[torch.Generator] = None,
678
+ latents: Optional[torch.FloatTensor] = None,
679
+ max_embeddings_multiples: Optional[int] = 3,
680
+ output_type: Optional[str] = "pil",
681
+ return_dict: bool = True,
682
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
683
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
684
+ callback_steps: Optional[int] = 1,
685
+ **kwargs,
686
+ ):
687
+ r"""
688
+ Function invoked when calling the pipeline for generation.
689
+
690
+ Args:
691
+ prompt (`str` or `List[str]`):
692
+ The prompt or prompts to guide the image generation.
693
+ negative_prompt (`str` or `List[str]`, *optional*):
694
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
695
+ if `guidance_scale` is less than `1`).
696
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
697
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
698
+ process.
699
+ mask_image (`torch.FloatTensor` or `PIL.Image.Image`):
700
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
701
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
702
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
703
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
704
+ height (`int`, *optional*, defaults to 512):
705
+ The height in pixels of the generated image.
706
+ width (`int`, *optional*, defaults to 512):
707
+ The width in pixels of the generated image.
708
+ num_inference_steps (`int`, *optional*, defaults to 50):
709
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
710
+ expense of slower inference.
711
+ guidance_scale (`float`, *optional*, defaults to 7.5):
712
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
713
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
714
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
715
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
716
+ usually at the expense of lower image quality.
717
+ strength (`float`, *optional*, defaults to 0.8):
718
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
719
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
720
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
721
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
722
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
723
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
724
+ The number of images to generate per prompt.
725
+ eta (`float`, *optional*, defaults to 0.0):
726
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
727
+ [`schedulers.DDIMScheduler`], will be ignored for others.
728
+ generator (`torch.Generator`, *optional*):
729
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
730
+ deterministic.
731
+ latents (`torch.FloatTensor`, *optional*):
732
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
733
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
734
+ tensor will be generated by sampling using the supplied random `generator`.
735
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
736
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
737
+ output_type (`str`, *optional*, defaults to `"pil"`):
738
+ The output format of the generated image. Choose between
739
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
740
+ return_dict (`bool`, *optional*, defaults to `True`):
741
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
742
+ plain tuple.
743
+ callback (`Callable`, *optional*):
744
+ A function that will be called every `callback_steps` steps during inference. The function will be
745
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
746
+ is_cancelled_callback (`Callable`, *optional*):
747
+ A function that will be called every `callback_steps` steps during inference. If the function returns
748
+ `True`, the inference will be cancelled.
749
+ callback_steps (`int`, *optional*, defaults to 1):
750
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
751
+ called at every step.
752
+
753
+ Returns:
754
+ `None` if cancelled by `is_cancelled_callback`,
755
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
756
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
757
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
758
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
759
+ (nsfw) content, according to the `safety_checker`.
760
+ """
761
+ message = "Please use `image` instead of `init_image`."
762
+ init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs)
763
+ image = init_image or image
764
+
765
+ # 0. Default height and width to unet
766
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
767
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
768
+
769
+ # 1. Check inputs. Raise error if not correct
770
+ self.check_inputs(prompt, height, width, strength, callback_steps)
771
+
772
+ # 2. Define call parameters
773
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
774
+ device = self._execution_device
775
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
776
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
777
+ # corresponds to doing no classifier free guidance.
778
+ do_classifier_free_guidance = guidance_scale > 1.0
779
+
780
+ # 3. Encode input prompt
781
+ text_embeddings = self._encode_prompt(
782
+ prompt,
783
+ device,
784
+ num_images_per_prompt,
785
+ do_classifier_free_guidance,
786
+ negative_prompt,
787
+ max_embeddings_multiples,
788
+ )
789
+ dtype = text_embeddings.dtype
790
+
791
+ # 4. Preprocess image and mask
792
+ if isinstance(image, PIL.Image.Image):
793
+ image = preprocess_image(image)
794
+ if image is not None:
795
+ image = image.to(device=self.device, dtype=dtype)
796
+ if isinstance(mask_image, PIL.Image.Image):
797
+ mask_image = preprocess_mask(mask_image, self.vae_scale_factor)
798
+ if mask_image is not None:
799
+ mask = mask_image.to(device=self.device, dtype=dtype)
800
+ mask = torch.cat([mask] * batch_size * num_images_per_prompt)
801
+ else:
802
+ mask = None
803
+
804
+ # 5. set timesteps
805
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
806
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device, image is None)
807
+ latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
808
+
809
+ # 6. Prepare latent variables
810
+ latents, init_latents_orig, noise = self.prepare_latents(
811
+ image,
812
+ latent_timestep,
813
+ batch_size * num_images_per_prompt,
814
+ height,
815
+ width,
816
+ dtype,
817
+ device,
818
+ generator,
819
+ latents,
820
+ )
821
+
822
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
823
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
824
+
825
+ # 8. Denoising loop
826
+ for i, t in enumerate(self.progress_bar(timesteps)):
827
+ # expand the latents if we are doing classifier free guidance
828
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
829
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
830
+
831
+ # predict the noise residual
832
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
833
+
834
+ # perform guidance
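+ # classifier-free guidance combines the two predictions as
+ # noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)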
835
+ if do_classifier_free_guidance:
836
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
837
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
838
+
839
+ # compute the previous noisy sample x_t -> x_t-1
840
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
841
+
842
+ if mask is not None:
843
+ # masking
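+ # blend so the keep region (mask == 1) stays the re-noised original latents while the
+ # repaint region (mask == 0) takes the freshly denoised latents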
844
+ init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, torch.tensor([t]))
845
+ latents = (init_latents_proper * mask) + (latents * (1 - mask))
846
+
847
+ # call the callback, if provided
848
+ if i % callback_steps == 0:
849
+ if callback is not None:
850
+ callback(i, t, latents)
851
+ if is_cancelled_callback is not None and is_cancelled_callback():
852
+ return None
853
+
854
+ # 9. Post-processing
855
+ image = self.decode_latents(latents)
856
+
857
+ # 10. Run safety checker
858
+ image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype)
859
+
860
+ # 11. Convert to PIL
861
+ if output_type == "pil":
862
+ image = self.numpy_to_pil(image)
863
+
864
+ if not return_dict:
865
+ return image, has_nsfw_concept
866
+
867
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
868
+
869
+ def text2img(
870
+ self,
871
+ prompt: Union[str, List[str]],
872
+ negative_prompt: Optional[Union[str, List[str]]] = None,
873
+ height: int = 512,
874
+ width: int = 512,
875
+ num_inference_steps: int = 50,
876
+ guidance_scale: float = 7.5,
877
+ num_images_per_prompt: Optional[int] = 1,
878
+ eta: float = 0.0,
879
+ generator: Optional[torch.Generator] = None,
880
+ latents: Optional[torch.FloatTensor] = None,
881
+ max_embeddings_multiples: Optional[int] = 3,
882
+ output_type: Optional[str] = "pil",
883
+ return_dict: bool = True,
884
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
885
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
886
+ callback_steps: Optional[int] = 1,
887
+ **kwargs,
888
+ ):
889
+ r"""
890
+ Function for text-to-image generation.
891
+ Args:
892
+ prompt (`str` or `List[str]`):
893
+ The prompt or prompts to guide the image generation.
894
+ negative_prompt (`str` or `List[str]`, *optional*):
895
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
896
+ if `guidance_scale` is less than `1`).
897
+ height (`int`, *optional*, defaults to 512):
898
+ The height in pixels of the generated image.
899
+ width (`int`, *optional*, defaults to 512):
900
+ The width in pixels of the generated image.
901
+ num_inference_steps (`int`, *optional*, defaults to 50):
902
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
903
+ expense of slower inference.
904
+ guidance_scale (`float`, *optional*, defaults to 7.5):
905
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
906
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
907
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
908
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
909
+ usually at the expense of lower image quality.
910
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
911
+ The number of images to generate per prompt.
912
+ eta (`float`, *optional*, defaults to 0.0):
913
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
914
+ [`schedulers.DDIMScheduler`], will be ignored for others.
915
+ generator (`torch.Generator`, *optional*):
916
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
917
+ deterministic.
918
+ latents (`torch.FloatTensor`, *optional*):
919
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
920
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
921
+ tensor will be generated by sampling using the supplied random `generator`.
922
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
923
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
924
+ output_type (`str`, *optional*, defaults to `"pil"`):
925
+ The output format of the generated image. Choose between
926
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
927
+ return_dict (`bool`, *optional*, defaults to `True`):
928
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
929
+ plain tuple.
930
+ callback (`Callable`, *optional*):
931
+ A function that will be called every `callback_steps` steps during inference. The function will be
932
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
933
+ is_cancelled_callback (`Callable`, *optional*):
934
+ A function that will be called every `callback_steps` steps during inference. If the function returns
935
+ `True`, the inference will be cancelled.
936
+ callback_steps (`int`, *optional*, defaults to 1):
937
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
938
+ called at every step.
939
+ Returns:
940
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
941
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
942
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
943
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
944
+ (nsfw) content, according to the `safety_checker`.
945
+ """
946
+ return self.__call__(
947
+ prompt=prompt,
948
+ negative_prompt=negative_prompt,
949
+ height=height,
950
+ width=width,
951
+ num_inference_steps=num_inference_steps,
952
+ guidance_scale=guidance_scale,
953
+ num_images_per_prompt=num_images_per_prompt,
954
+ eta=eta,
955
+ generator=generator,
956
+ latents=latents,
957
+ max_embeddings_multiples=max_embeddings_multiples,
958
+ output_type=output_type,
959
+ return_dict=return_dict,
960
+ callback=callback,
961
+ is_cancelled_callback=is_cancelled_callback,
962
+ callback_steps=callback_steps,
963
+ **kwargs,
964
+ )
965
+
966
+ def img2img(
967
+ self,
968
+ image: Union[torch.FloatTensor, PIL.Image.Image],
969
+ prompt: Union[str, List[str]],
970
+ negative_prompt: Optional[Union[str, List[str]]] = None,
971
+ strength: float = 0.8,
972
+ num_inference_steps: Optional[int] = 50,
973
+ guidance_scale: Optional[float] = 7.5,
974
+ num_images_per_prompt: Optional[int] = 1,
975
+ eta: Optional[float] = 0.0,
976
+ generator: Optional[torch.Generator] = None,
977
+ max_embeddings_multiples: Optional[int] = 3,
978
+ output_type: Optional[str] = "pil",
979
+ return_dict: bool = True,
980
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
981
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
982
+ callback_steps: Optional[int] = 1,
983
+ **kwargs,
984
+ ):
985
+ r"""
986
+ Function for image-to-image generation.
987
+ Args:
988
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
989
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
990
+ process.
991
+ prompt (`str` or `List[str]`):
992
+ The prompt or prompts to guide the image generation.
993
+ negative_prompt (`str` or `List[str]`, *optional*):
994
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
995
+ if `guidance_scale` is less than `1`).
996
+ strength (`float`, *optional*, defaults to 0.8):
997
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
998
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
999
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
1000
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
1001
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
1002
+ num_inference_steps (`int`, *optional*, defaults to 50):
1003
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
1004
+ expense of slower inference. This parameter will be modulated by `strength`.
1005
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1006
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1007
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1008
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1009
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
1010
+ usually at the expense of lower image quality.
1011
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1012
+ The number of images to generate per prompt.
1013
+ eta (`float`, *optional*, defaults to 0.0):
1014
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1015
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1016
+ generator (`torch.Generator`, *optional*):
1017
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1018
+ deterministic.
1019
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1020
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1021
+ output_type (`str`, *optional*, defaults to `"pil"`):
1022
+ The output format of the generated image. Choose between
1023
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1024
+ return_dict (`bool`, *optional*, defaults to `True`):
1025
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1026
+ plain tuple.
1027
+ callback (`Callable`, *optional*):
1028
+ A function that will be called every `callback_steps` steps during inference. The function will be
1029
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1030
+ is_cancelled_callback (`Callable`, *optional*):
1031
+ A function that will be called every `callback_steps` steps during inference. If the function returns
1032
+ `True`, the inference will be cancelled.
1033
+ callback_steps (`int`, *optional*, defaults to 1):
1034
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1035
+ called at every step.
1036
+ Returns:
1037
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1038
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1039
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1040
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1041
+ (nsfw) content, according to the `safety_checker`.
1042
+ """
1043
+ return self.__call__(
1044
+ prompt=prompt,
1045
+ negative_prompt=negative_prompt,
1046
+ image=image,
1047
+ num_inference_steps=num_inference_steps,
1048
+ guidance_scale=guidance_scale,
1049
+ strength=strength,
1050
+ num_images_per_prompt=num_images_per_prompt,
1051
+ eta=eta,
1052
+ generator=generator,
1053
+ max_embeddings_multiples=max_embeddings_multiples,
1054
+ output_type=output_type,
1055
+ return_dict=return_dict,
1056
+ callback=callback,
1057
+ is_cancelled_callback=is_cancelled_callback,
1058
+ callback_steps=callback_steps,
1059
+ **kwargs,
1060
+ )
1061
+
1062
+ def inpaint(
1063
+ self,
1064
+ image: Union[torch.FloatTensor, PIL.Image.Image],
1065
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image],
1066
+ prompt: Union[str, List[str]],
1067
+ negative_prompt: Optional[Union[str, List[str]]] = None,
1068
+ strength: float = 0.8,
1069
+ num_inference_steps: Optional[int] = 50,
1070
+ guidance_scale: Optional[float] = 7.5,
1071
+ num_images_per_prompt: Optional[int] = 1,
1072
+ eta: Optional[float] = 0.0,
1073
+ generator: Optional[torch.Generator] = None,
1074
+ max_embeddings_multiples: Optional[int] = 3,
1075
+ output_type: Optional[str] = "pil",
1076
+ return_dict: bool = True,
1077
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
1078
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
1079
+ callback_steps: Optional[int] = 1,
1080
+ **kwargs,
1081
+ ):
1082
+ r"""
1083
+ Function for inpainting.
1084
+ Args:
1085
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
1086
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
1087
+ process. This is the image whose masked region will be inpainted.
1088
+ mask_image (`torch.FloatTensor` or `PIL.Image.Image`):
1089
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
1090
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
1091
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
1092
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
1093
+ prompt (`str` or `List[str]`):
1094
+ The prompt or prompts to guide the image generation.
1095
+ negative_prompt (`str` or `List[str]`, *optional*):
1096
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1097
+ if `guidance_scale` is less than `1`).
1098
+ strength (`float`, *optional*, defaults to 0.8):
1099
+ Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
1100
+ is 1, the denoising process will be run on the masked area for the full number of iterations specified
1101
+ in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more
1102
+ noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
1103
+ num_inference_steps (`int`, *optional*, defaults to 50):
1104
+ The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
1105
+ the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
1106
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1107
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1108
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1109
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1110
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
1111
+ usually at the expense of lower image quality.
1112
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1113
+ The number of images to generate per prompt.
1114
+ eta (`float`, *optional*, defaults to 0.0):
1115
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1116
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1117
+ generator (`torch.Generator`, *optional*):
1118
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1119
+ deterministic.
1120
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1121
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1122
+ output_type (`str`, *optional*, defaults to `"pil"`):
1123
+ The output format of the generated image. Choose between
1124
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1125
+ return_dict (`bool`, *optional*, defaults to `True`):
1126
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1127
+ plain tuple.
1128
+ callback (`Callable`, *optional*):
1129
+ A function that will be called every `callback_steps` steps during inference. The function will be
1130
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1131
+ is_cancelled_callback (`Callable`, *optional*):
1132
+ A function that will be called every `callback_steps` steps during inference. If the function returns
1133
+ `True`, the inference will be cancelled.
1134
+ callback_steps (`int`, *optional*, defaults to 1):
1135
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1136
+ called at every step.
1137
+ Returns:
1138
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1139
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1140
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1141
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1142
+ (nsfw) content, according to the `safety_checker`.
1143
+ """
1144
+ return self.__call__(
1145
+ prompt=prompt,
1146
+ negative_prompt=negative_prompt,
1147
+ image=image,
1148
+ mask_image=mask_image,
1149
+ num_inference_steps=num_inference_steps,
1150
+ guidance_scale=guidance_scale,
1151
+ strength=strength,
1152
+ num_images_per_prompt=num_images_per_prompt,
1153
+ eta=eta,
1154
+ generator=generator,
1155
+ max_embeddings_multiples=max_embeddings_multiples,
1156
+ output_type=output_type,
1157
+ return_dict=return_dict,
1158
+ callback=callback,
1159
+ is_cancelled_callback=is_cancelled_callback,
1160
+ callback_steps=callback_steps,
1161
+ **kwargs,
1162
+ )
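+
+ # Usage sketch (illustrative only; assumes the diffusers `custom_pipeline` mechanism and a checkpoint
+ # such as "CompVis/stable-diffusion-v1-4" are available -- adjust to your own setup):
+ #
+ #   import torch
+ #   from diffusers import DiffusionPipeline
+ #
+ #   pipe = DiffusionPipeline.from_pretrained(
+ #       "CompVis/stable-diffusion-v1-4",
+ #       custom_pipeline="lpw_stable_diffusion",
+ #       torch_dtype=torch.float16,
+ #   ).to("cuda")
+ #   image = pipe.text2img("a (very beautiful:1.3) landscape, [blurry]", max_embeddings_multiples=3).images[0]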
v0.11.1/lpw_stable_diffusion_onnx.py ADDED
@@ -0,0 +1,1148 @@
1
+ import inspect
2
+ import re
3
+ from typing import Callable, List, Optional, Union
4
+
5
+ import numpy as np
6
+ import torch
7
+
8
+ import diffusers
9
+ import PIL
10
+ from diffusers import OnnxStableDiffusionPipeline, SchedulerMixin
11
+ from diffusers.onnx_utils import OnnxRuntimeModel
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
13
+ from diffusers.utils import deprecate, logging
14
+ from packaging import version
15
+ from transformers import CLIPFeatureExtractor, CLIPTokenizer
16
+
17
+
18
+ try:
19
+ from diffusers.onnx_utils import ORT_TO_NP_TYPE
20
+ except ImportError:
21
+ ORT_TO_NP_TYPE = {
22
+ "tensor(bool)": np.bool_,
23
+ "tensor(int8)": np.int8,
24
+ "tensor(uint8)": np.uint8,
25
+ "tensor(int16)": np.int16,
26
+ "tensor(uint16)": np.uint16,
27
+ "tensor(int32)": np.int32,
28
+ "tensor(uint32)": np.uint32,
29
+ "tensor(int64)": np.int64,
30
+ "tensor(uint64)": np.uint64,
31
+ "tensor(float16)": np.float16,
32
+ "tensor(float)": np.float32,
33
+ "tensor(double)": np.float64,
34
+ }
35
+
36
+ try:
37
+ from diffusers.utils import PIL_INTERPOLATION
38
+ except ImportError:
39
+ if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
40
+ PIL_INTERPOLATION = {
41
+ "linear": PIL.Image.Resampling.BILINEAR,
42
+ "bilinear": PIL.Image.Resampling.BILINEAR,
43
+ "bicubic": PIL.Image.Resampling.BICUBIC,
44
+ "lanczos": PIL.Image.Resampling.LANCZOS,
45
+ "nearest": PIL.Image.Resampling.NEAREST,
46
+ }
47
+ else:
48
+ PIL_INTERPOLATION = {
49
+ "linear": PIL.Image.LINEAR,
50
+ "bilinear": PIL.Image.BILINEAR,
51
+ "bicubic": PIL.Image.BICUBIC,
52
+ "lanczos": PIL.Image.LANCZOS,
53
+ "nearest": PIL.Image.NEAREST,
54
+ }
55
+ # ------------------------------------------------------------------------------
56
+
57
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
58
+
59
+ re_attention = re.compile(
60
+ r"""
61
+ \\\(|
62
+ \\\)|
63
+ \\\[|
64
+ \\]|
65
+ \\\\|
66
+ \\|
67
+ \(|
68
+ \[|
69
+ :([+-]?[.\d]+)\)|
70
+ \)|
71
+ ]|
72
+ [^\\()\[\]:]+|
73
+ :
74
+ """,
75
+ re.X,
76
+ )
77
+
78
+
79
+ def parse_prompt_attention(text):
80
+ """
81
+ Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
82
+ Accepted tokens are:
83
+ (abc) - increases attention to abc by a multiplier of 1.1
84
+ (abc:3.12) - increases attention to abc by a multiplier of 3.12
85
+ [abc] - decreases attention to abc by a multiplier of 1.1
86
+ \( - literal character '('
87
+ \[ - literal character '['
88
+ \) - literal character ')'
89
+ \] - literal character ']'
90
+ \\ - literal character '\'
91
+ anything else - just text
92
+ >>> parse_prompt_attention('normal text')
93
+ [['normal text', 1.0]]
94
+ >>> parse_prompt_attention('an (important) word')
95
+ [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
96
+ >>> parse_prompt_attention('(unbalanced')
97
+ [['unbalanced', 1.1]]
98
+ >>> parse_prompt_attention('\(literal\]')
99
+ [['(literal]', 1.0]]
100
+ >>> parse_prompt_attention('(unnecessary)(parens)')
101
+ [['unnecessaryparens', 1.1]]
102
+ >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
103
+ [['a ', 1.0],
104
+ ['house', 1.5730000000000004],
105
+ [' ', 1.1],
106
+ ['on', 1.0],
107
+ [' a ', 1.1],
108
+ ['hill', 0.55],
109
+ [', sun, ', 1.1],
110
+ ['sky', 1.4641000000000006],
111
+ ['.', 1.1]]
112
+ """
113
+
114
+ res = []
115
+ round_brackets = []
116
+ square_brackets = []
117
+
118
+ round_bracket_multiplier = 1.1
119
+ square_bracket_multiplier = 1 / 1.1
120
+
121
+ def multiply_range(start_position, multiplier):
122
+ for p in range(start_position, len(res)):
123
+ res[p][1] *= multiplier
124
+
125
+ for m in re_attention.finditer(text):
126
+ text = m.group(0)
127
+ weight = m.group(1)
128
+
129
+ if text.startswith("\\"):
130
+ res.append([text[1:], 1.0])
131
+ elif text == "(":
132
+ round_brackets.append(len(res))
133
+ elif text == "[":
134
+ square_brackets.append(len(res))
135
+ elif weight is not None and len(round_brackets) > 0:
136
+ multiply_range(round_brackets.pop(), float(weight))
137
+ elif text == ")" and len(round_brackets) > 0:
138
+ multiply_range(round_brackets.pop(), round_bracket_multiplier)
139
+ elif text == "]" and len(square_brackets) > 0:
140
+ multiply_range(square_brackets.pop(), square_bracket_multiplier)
141
+ else:
142
+ res.append([text, 1.0])
143
+
144
+ for pos in round_brackets:
145
+ multiply_range(pos, round_bracket_multiplier)
146
+
147
+ for pos in square_brackets:
148
+ multiply_range(pos, square_bracket_multiplier)
149
+
150
+ if len(res) == 0:
151
+ res = [["", 1.0]]
152
+
153
+ # merge runs of identical weights
154
+ i = 0
155
+ while i + 1 < len(res):
156
+ if res[i][1] == res[i + 1][1]:
157
+ res[i][0] += res[i + 1][0]
158
+ res.pop(i + 1)
159
+ else:
160
+ i += 1
161
+
162
+ return res
163
+
164
+
165
+ def get_prompts_with_weights(pipe, prompt: List[str], max_length: int):
166
+ r"""
167
+ Tokenize a list of prompts and return the tokens of each prompt together with the weight of each token.
168
+
169
+ No padding, starting or ending token is included.
170
+ """
171
+ tokens = []
172
+ weights = []
173
+ truncated = False
174
+ for text in prompt:
175
+ texts_and_weights = parse_prompt_attention(text)
176
+ text_token = []
177
+ text_weight = []
178
+ for word, weight in texts_and_weights:
179
+ # tokenize and discard the starting and the ending token
180
+ token = pipe.tokenizer(word, return_tensors="np").input_ids[0, 1:-1]
181
+ text_token += list(token)
182
+ # copy the weight by length of token
183
+ text_weight += [weight] * len(token)
184
+ # stop if the text is too long (longer than truncation limit)
185
+ if len(text_token) > max_length:
186
+ truncated = True
187
+ break
188
+ # truncate
189
+ if len(text_token) > max_length:
190
+ truncated = True
191
+ text_token = text_token[:max_length]
192
+ text_weight = text_weight[:max_length]
193
+ tokens.append(text_token)
194
+ weights.append(text_weight)
195
+ if truncated:
196
+ logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
197
+ return tokens, weights
198
+
199
+
200
+ def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, no_boseos_middle=True, chunk_length=77):
201
+ r"""
202
+ Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
203
+ """
204
+ max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
205
+ weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
206
+ for i in range(len(tokens)):
207
+ tokens[i] = [bos] + tokens[i] + [eos] * (max_length - 1 - len(tokens[i]))
208
+ if no_boseos_middle:
209
+ weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
210
+ else:
211
+ w = []
212
+ if len(weights[i]) == 0:
213
+ w = [1.0] * weights_length
214
+ else:
215
+ for j in range(max_embeddings_multiples):
216
+ w.append(1.0) # weight for starting token in this chunk
217
+ w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
218
+ w.append(1.0) # weight for ending token in this chunk
219
+ w += [1.0] * (weights_length - len(w))
220
+ weights[i] = w[:]
221
+
222
+ return tokens, weights
223
+
224
+
225
+ def get_unweighted_text_embeddings(
226
+ pipe,
227
+ text_input: np.array,
228
+ chunk_length: int,
229
+ no_boseos_middle: Optional[bool] = True,
230
+ ):
231
+ """
232
+ When the token sequence is longer than the capacity of the text encoder,
233
+ it is split into chunks and each chunk is sent to the text encoder individually.
234
+ """
235
+ max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
236
+ if max_embeddings_multiples > 1:
237
+ text_embeddings = []
238
+ for i in range(max_embeddings_multiples):
239
+ # extract the i-th chunk
240
+ text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].copy()
241
+
242
+ # cover the head and the tail by the starting and the ending tokens
243
+ text_input_chunk[:, 0] = text_input[0, 0]
244
+ text_input_chunk[:, -1] = text_input[0, -1]
245
+
246
+ text_embedding = pipe.text_encoder(input_ids=text_input_chunk)[0]
247
+
248
+ if no_boseos_middle:
249
+ if i == 0:
250
+ # discard the ending token
251
+ text_embedding = text_embedding[:, :-1]
252
+ elif i == max_embeddings_multiples - 1:
253
+ # discard the starting token
254
+ text_embedding = text_embedding[:, 1:]
255
+ else:
256
+ # discard both starting and ending tokens
257
+ text_embedding = text_embedding[:, 1:-1]
258
+
259
+ text_embeddings.append(text_embedding)
260
+ text_embeddings = np.concatenate(text_embeddings, axis=1)
261
+ else:
262
+ text_embeddings = pipe.text_encoder(input_ids=text_input)[0]
263
+ return text_embeddings
264
+
265
+
266
+ def get_weighted_text_embeddings(
267
+ pipe,
268
+ prompt: Union[str, List[str]],
269
+ uncond_prompt: Optional[Union[str, List[str]]] = None,
270
+ max_embeddings_multiples: Optional[int] = 4,
271
+ no_boseos_middle: Optional[bool] = False,
272
+ skip_parsing: Optional[bool] = False,
273
+ skip_weighting: Optional[bool] = False,
274
+ **kwargs,
275
+ ):
276
+ r"""
277
+ Prompts can be assigned with local weights using brackets. For example,
278
+ prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
279
+ and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
280
+
281
+ Also, to regularize the embedding, the weighted embedding is scaled to preserve the original mean.
282
+
283
+ Args:
284
+ pipe (`OnnxStableDiffusionPipeline`):
285
+ Pipe to provide access to the tokenizer and the text encoder.
286
+ prompt (`str` or `List[str]`):
287
+ The prompt or prompts to guide the image generation.
288
+ uncond_prompt (`str` or `List[str]`):
289
+ The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
290
+ is provided, the embeddings of prompt and uncond_prompt are concatenated.
291
+ max_embeddings_multiples (`int`, *optional*, defaults to `4`):
292
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
293
+ no_boseos_middle (`bool`, *optional*, defaults to `False`):
294
+ If the length of the text tokens is a multiple of the capacity of the text encoder, whether to keep the starting and
295
+ ending tokens of each chunk in the middle.
296
+ skip_parsing (`bool`, *optional*, defaults to `False`):
297
+ Skip the parsing of brackets.
298
+ skip_weighting (`bool`, *optional*, defaults to `False`):
299
+ Skip the weighting. When parsing is skipped, this is forced to `True`.
300
+ """
301
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
302
+ if isinstance(prompt, str):
303
+ prompt = [prompt]
304
+
305
+ if not skip_parsing:
306
+ prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
307
+ if uncond_prompt is not None:
308
+ if isinstance(uncond_prompt, str):
309
+ uncond_prompt = [uncond_prompt]
310
+ uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
311
+ else:
312
+ prompt_tokens = [
313
+ token[1:-1]
314
+ for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True, return_tensors="np").input_ids
315
+ ]
316
+ prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
317
+ if uncond_prompt is not None:
318
+ if isinstance(uncond_prompt, str):
319
+ uncond_prompt = [uncond_prompt]
320
+ uncond_tokens = [
321
+ token[1:-1]
322
+ for token in pipe.tokenizer(
323
+ uncond_prompt,
324
+ max_length=max_length,
325
+ truncation=True,
326
+ return_tensors="np",
327
+ ).input_ids
328
+ ]
329
+ uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
330
+
331
+ # round up the longest length of tokens to a multiple of (model_max_length - 2)
332
+ max_length = max([len(token) for token in prompt_tokens])
333
+ if uncond_prompt is not None:
334
+ max_length = max(max_length, max([len(token) for token in uncond_tokens]))
335
+
336
+ max_embeddings_multiples = min(
337
+ max_embeddings_multiples,
338
+ (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
339
+ )
340
+ max_embeddings_multiples = max(1, max_embeddings_multiples)
341
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
342
+
343
+ # pad the length of tokens and weights
344
+ bos = pipe.tokenizer.bos_token_id
345
+ eos = pipe.tokenizer.eos_token_id
346
+ prompt_tokens, prompt_weights = pad_tokens_and_weights(
347
+ prompt_tokens,
348
+ prompt_weights,
349
+ max_length,
350
+ bos,
351
+ eos,
352
+ no_boseos_middle=no_boseos_middle,
353
+ chunk_length=pipe.tokenizer.model_max_length,
354
+ )
355
+ prompt_tokens = np.array(prompt_tokens, dtype=np.int32)
356
+ if uncond_prompt is not None:
357
+ uncond_tokens, uncond_weights = pad_tokens_and_weights(
358
+ uncond_tokens,
359
+ uncond_weights,
360
+ max_length,
361
+ bos,
362
+ eos,
363
+ no_boseos_middle=no_boseos_middle,
364
+ chunk_length=pipe.tokenizer.model_max_length,
365
+ )
366
+ uncond_tokens = np.array(uncond_tokens, dtype=np.int32)
367
+
368
+ # get the embeddings
369
+ text_embeddings = get_unweighted_text_embeddings(
370
+ pipe,
371
+ prompt_tokens,
372
+ pipe.tokenizer.model_max_length,
373
+ no_boseos_middle=no_boseos_middle,
374
+ )
375
+ prompt_weights = np.array(prompt_weights, dtype=text_embeddings.dtype)
376
+ if uncond_prompt is not None:
377
+ uncond_embeddings = get_unweighted_text_embeddings(
378
+ pipe,
379
+ uncond_tokens,
380
+ pipe.tokenizer.model_max_length,
381
+ no_boseos_middle=no_boseos_middle,
382
+ )
383
+ uncond_weights = np.array(uncond_weights, dtype=uncond_embeddings.dtype)
384
+
385
+ # assign weights to the prompts and normalize in the sense of mean
386
+ # TODO: should we normalize by chunk or in a whole (current implementation)?
387
+ if (not skip_parsing) and (not skip_weighting):
388
+ previous_mean = text_embeddings.mean(axis=(-2, -1))
389
+ text_embeddings *= prompt_weights[:, :, None]
390
+ text_embeddings *= (previous_mean / text_embeddings.mean(axis=(-2, -1)))[:, None, None]
391
+ if uncond_prompt is not None:
392
+ previous_mean = uncond_embeddings.mean(axis=(-2, -1))
393
+ uncond_embeddings *= uncond_weights[:, :, None]
394
+ uncond_embeddings *= (previous_mean / uncond_embeddings.mean(axis=(-2, -1)))[:, None, None]
395
+
396
+ # For classifier free guidance, we need to do two forward passes.
397
+ # Here we concatenate the unconditional and text embeddings into a single batch
398
+ # to avoid doing two forward passes
399
+ if uncond_prompt is not None:
400
+ return text_embeddings, uncond_embeddings
401
+
402
+ return text_embeddings
403
+
404
+
405
+ def preprocess_image(image):
406
+ w, h = image.size
407
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
408
+ image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
409
+ image = np.array(image).astype(np.float32) / 255.0
410
+ image = image[None].transpose(0, 3, 1, 2)
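+ # map pixel values from [0, 1] to [-1, 1], the range the VAE encoder expects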
411
+ return 2.0 * image - 1.0
412
+
413
+
414
+ def preprocess_mask(mask, scale_factor=8):
415
+ mask = mask.convert("L")
416
+ w, h = mask.size
417
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
418
+ mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL_INTERPOLATION["nearest"])
419
+ mask = np.array(mask).astype(np.float32) / 255.0
420
+ mask = np.tile(mask, (4, 1, 1))
421
+ mask = mask[None].transpose(0, 1, 2, 3)  # add a batch dimension; the transpose with (0, 1, 2, 3) is effectively a no-op
422
+ mask = 1 - mask # repaint white, keep black
423
+ return mask
424
+
425
+
426
+ class OnnxStableDiffusionLongPromptWeightingPipeline(OnnxStableDiffusionPipeline):
427
+ r"""
428
+ Pipeline for text-to-image generation using Stable Diffusion without a token length limit, with support for parsing
429
+ weights in the prompt.
430
+
431
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
432
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
433
+ """
434
+ if version.parse(version.parse(diffusers.__version__).base_version) >= version.parse("0.9.0"):
435
+
436
+ def __init__(
437
+ self,
438
+ vae_encoder: OnnxRuntimeModel,
439
+ vae_decoder: OnnxRuntimeModel,
440
+ text_encoder: OnnxRuntimeModel,
441
+ tokenizer: CLIPTokenizer,
442
+ unet: OnnxRuntimeModel,
443
+ scheduler: SchedulerMixin,
444
+ safety_checker: OnnxRuntimeModel,
445
+ feature_extractor: CLIPFeatureExtractor,
446
+ requires_safety_checker: bool = True,
447
+ ):
448
+ super().__init__(
449
+ vae_encoder=vae_encoder,
450
+ vae_decoder=vae_decoder,
451
+ text_encoder=text_encoder,
452
+ tokenizer=tokenizer,
453
+ unet=unet,
454
+ scheduler=scheduler,
455
+ safety_checker=safety_checker,
456
+ feature_extractor=feature_extractor,
457
+ requires_safety_checker=requires_safety_checker,
458
+ )
459
+ self.__init__additional__()
460
+
461
+ else:
462
+
463
+ def __init__(
464
+ self,
465
+ vae_encoder: OnnxRuntimeModel,
466
+ vae_decoder: OnnxRuntimeModel,
467
+ text_encoder: OnnxRuntimeModel,
468
+ tokenizer: CLIPTokenizer,
469
+ unet: OnnxRuntimeModel,
470
+ scheduler: SchedulerMixin,
471
+ safety_checker: OnnxRuntimeModel,
472
+ feature_extractor: CLIPFeatureExtractor,
473
+ ):
474
+ super().__init__(
475
+ vae_encoder=vae_encoder,
476
+ vae_decoder=vae_decoder,
477
+ text_encoder=text_encoder,
478
+ tokenizer=tokenizer,
479
+ unet=unet,
480
+ scheduler=scheduler,
481
+ safety_checker=safety_checker,
482
+ feature_extractor=feature_extractor,
483
+ )
484
+ self.__init__additional__()
485
+
486
+ def __init__additional__(self):
487
+ self.unet_in_channels = 4
488
+ self.vae_scale_factor = 8
489
+
490
+ def _encode_prompt(
491
+ self,
492
+ prompt,
493
+ num_images_per_prompt,
494
+ do_classifier_free_guidance,
495
+ negative_prompt,
496
+ max_embeddings_multiples,
497
+ ):
498
+ r"""
499
+ Encodes the prompt into text encoder hidden states.
500
+
501
+ Args:
502
+ prompt (`str` or `list(int)`):
503
+ prompt to be encoded
504
+ num_images_per_prompt (`int`):
505
+ number of images that should be generated per prompt
506
+ do_classifier_free_guidance (`bool`):
507
+ whether to use classifier free guidance or not
508
+ negative_prompt (`str` or `List[str]`):
509
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
510
+ if `guidance_scale` is less than `1`).
511
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
512
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
513
+ """
514
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
515
+
516
+ if negative_prompt is None:
517
+ negative_prompt = [""] * batch_size
518
+ elif isinstance(negative_prompt, str):
519
+ negative_prompt = [negative_prompt] * batch_size
520
+ if batch_size != len(negative_prompt):
521
+ raise ValueError(
522
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
523
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
524
+ " the batch size of `prompt`."
525
+ )
526
+
527
+ text_embeddings, uncond_embeddings = get_weighted_text_embeddings(
528
+ pipe=self,
529
+ prompt=prompt,
530
+ uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
531
+ max_embeddings_multiples=max_embeddings_multiples,
532
+ )
533
+
534
+ text_embeddings = text_embeddings.repeat(num_images_per_prompt, 0)
535
+ if do_classifier_free_guidance:
536
+ uncond_embeddings = uncond_embeddings.repeat(num_images_per_prompt, 0)
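+ # unconditional embeddings come first so the np.split in the denoising loop recovers them in the same order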
537
+ text_embeddings = np.concatenate([uncond_embeddings, text_embeddings])
538
+
539
+ return text_embeddings
540
+
541
+ def check_inputs(self, prompt, height, width, strength, callback_steps):
542
+ if not isinstance(prompt, str) and not isinstance(prompt, list):
543
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
544
+
545
+ if strength < 0 or strength > 1:
546
+ raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
547
+
548
+ if height % 8 != 0 or width % 8 != 0:
549
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
550
+
551
+ if (callback_steps is None) or (
552
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
553
+ ):
554
+ raise ValueError(
555
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
556
+ f" {type(callback_steps)}."
557
+ )
558
+
559
+ def get_timesteps(self, num_inference_steps, strength, is_text2img):
560
+ if is_text2img:
561
+ return self.scheduler.timesteps, num_inference_steps
562
+ else:
563
+ # get the original timestep using init_timestep
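+ # e.g. num_inference_steps=50, strength=0.8, offset=1 -> init_timestep=41, t_start=10,
+ # so the img2img/inpaint path runs only the last 40 of the 50 scheduled steps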
564
+ offset = self.scheduler.config.get("steps_offset", 0)
565
+ init_timestep = int(num_inference_steps * strength) + offset
566
+ init_timestep = min(init_timestep, num_inference_steps)
567
+
568
+ t_start = max(num_inference_steps - init_timestep + offset, 0)
569
+ timesteps = self.scheduler.timesteps[t_start:]
570
+ return timesteps, num_inference_steps - t_start
571
+
572
+ def run_safety_checker(self, image):
573
+ if self.safety_checker is not None:
574
+ safety_checker_input = self.feature_extractor(
575
+ self.numpy_to_pil(image), return_tensors="np"
576
+ ).pixel_values.astype(image.dtype)
577
+ # the safety_checker raises an error when called directly with a batch size > 1, so run it image by image
578
+ images, has_nsfw_concept = [], []
579
+ for i in range(image.shape[0]):
580
+ image_i, has_nsfw_concept_i = self.safety_checker(
581
+ clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
582
+ )
583
+ images.append(image_i)
584
+ has_nsfw_concept.append(has_nsfw_concept_i[0])
585
+ image = np.concatenate(images)
586
+ else:
587
+ has_nsfw_concept = None
588
+ return image, has_nsfw_concept
589
+
590
+ def decode_latents(self, latents):
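+ # 0.18215 is the SD latent scaling factor applied in prepare_latents; divide it back out before decoding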
591
+ latents = 1 / 0.18215 * latents
592
+ # image = self.vae_decoder(latent_sample=latents)[0]
593
+ # the half-precision VAE decoder seems to give incorrect results when the batch size > 1, so decode latents one at a time
594
+ image = np.concatenate(
595
+ [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])]
596
+ )
597
+ image = np.clip(image / 2 + 0.5, 0, 1)
598
+ image = image.transpose((0, 2, 3, 1))
599
+ return image
600
+
601
+ def prepare_extra_step_kwargs(self, generator, eta):
602
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
603
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
604
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
605
+ # and should be between [0, 1]
606
+
607
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
608
+ extra_step_kwargs = {}
609
+ if accepts_eta:
610
+ extra_step_kwargs["eta"] = eta
611
+
612
+ # check if the scheduler accepts generator
613
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
614
+ if accepts_generator:
615
+ extra_step_kwargs["generator"] = generator
616
+ return extra_step_kwargs
617
+
618
+ def prepare_latents(self, image, timestep, batch_size, height, width, dtype, generator, latents=None):
619
+ if image is None:
620
+ shape = (
621
+ batch_size,
622
+ self.unet_in_channels,
623
+ height // self.vae_scale_factor,
624
+ width // self.vae_scale_factor,
625
+ )
626
+
627
+ if latents is None:
628
+ latents = torch.randn(shape, generator=generator, device="cpu").numpy().astype(dtype)
629
+ else:
630
+ if latents.shape != shape:
631
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
632
+
633
+ # scale the initial noise by the standard deviation required by the scheduler
634
+ latents = (torch.from_numpy(latents) * self.scheduler.init_noise_sigma).numpy()
635
+ return latents, None, None
636
+ else:
637
+ init_latents = self.vae_encoder(sample=image)[0]
638
+ init_latents = 0.18215 * init_latents
639
+ init_latents = np.concatenate([init_latents] * batch_size, axis=0)
640
+ init_latents_orig = init_latents
641
+ shape = init_latents.shape
642
+
643
+ # add noise to latents using the timesteps
644
+ noise = torch.randn(shape, generator=generator, device="cpu").numpy().astype(dtype)
645
+ latents = self.scheduler.add_noise(
646
+ torch.from_numpy(init_latents), torch.from_numpy(noise), timestep
647
+ ).numpy()
648
+ return latents, init_latents_orig, noise
649
+
650
+ @torch.no_grad()
651
+ def __call__(
652
+ self,
653
+ prompt: Union[str, List[str]],
654
+ negative_prompt: Optional[Union[str, List[str]]] = None,
655
+ image: Union[np.ndarray, PIL.Image.Image] = None,
656
+ mask_image: Union[np.ndarray, PIL.Image.Image] = None,
657
+ height: int = 512,
658
+ width: int = 512,
659
+ num_inference_steps: int = 50,
660
+ guidance_scale: float = 7.5,
661
+ strength: float = 0.8,
662
+ num_images_per_prompt: Optional[int] = 1,
663
+ eta: float = 0.0,
664
+ generator: Optional[torch.Generator] = None,
665
+ latents: Optional[np.ndarray] = None,
666
+ max_embeddings_multiples: Optional[int] = 3,
667
+ output_type: Optional[str] = "pil",
668
+ return_dict: bool = True,
669
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
670
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
671
+ callback_steps: Optional[int] = 1,
672
+ **kwargs,
673
+ ):
674
+ r"""
675
+ Function invoked when calling the pipeline for generation.
676
+
677
+ Args:
678
+ prompt (`str` or `List[str]`):
679
+ The prompt or prompts to guide the image generation.
680
+ negative_prompt (`str` or `List[str]`, *optional*):
681
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
682
+ if `guidance_scale` is less than `1`).
683
+ image (`np.ndarray` or `PIL.Image.Image`):
684
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
685
+ process.
686
+ mask_image (`np.ndarray` or `PIL.Image.Image`):
687
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
688
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
689
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
690
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
691
+ height (`int`, *optional*, defaults to 512):
692
+ The height in pixels of the generated image.
693
+ width (`int`, *optional*, defaults to 512):
694
+ The width in pixels of the generated image.
695
+ num_inference_steps (`int`, *optional*, defaults to 50):
696
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
697
+ expense of slower inference.
698
+ guidance_scale (`float`, *optional*, defaults to 7.5):
699
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
700
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
701
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
702
+ 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
703
+ usually at the expense of lower image quality.
704
+ strength (`float`, *optional*, defaults to 0.8):
705
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
706
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
707
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
708
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
709
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
710
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
711
+ The number of images to generate per prompt.
712
+ eta (`float`, *optional*, defaults to 0.0):
713
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
714
+ [`schedulers.DDIMScheduler`], will be ignored for others.
715
+ generator (`torch.Generator`, *optional*):
716
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
717
+ deterministic.
718
+ latents (`np.ndarray`, *optional*):
719
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
720
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
721
+ tensor will be generated by sampling using the supplied random `generator`.
722
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
723
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
724
+ output_type (`str`, *optional*, defaults to `"pil"`):
725
+ The output format of the generated image. Choose between
726
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
727
+ return_dict (`bool`, *optional*, defaults to `True`):
728
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
729
+ plain tuple.
730
+ callback (`Callable`, *optional*):
731
+ A function that will be called every `callback_steps` steps during inference. The function will be
732
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
733
+ is_cancelled_callback (`Callable`, *optional*):
734
+ A function that will be called every `callback_steps` steps during inference. If the function returns
735
+ `True`, the inference will be cancelled.
736
+ callback_steps (`int`, *optional*, defaults to 1):
737
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
738
+ called at every step.
739
+
740
+ Returns:
741
+ `None` if cancelled by `is_cancelled_callback`,
742
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
743
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
744
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
745
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
746
+ (nsfw) content, according to the `safety_checker`.
747
+ """
748
+ message = "Please use `image` instead of `init_image`."
749
+ init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs)
750
+ image = init_image or image
751
+
752
+ # 0. Default height and width to unet
753
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
754
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
755
+
756
+ # 1. Check inputs. Raise error if not correct
757
+ self.check_inputs(prompt, height, width, strength, callback_steps)
758
+
759
+ # 2. Define call parameters
760
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
761
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
762
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
763
+ # corresponds to doing no classifier free guidance.
764
+ do_classifier_free_guidance = guidance_scale > 1.0
765
+
766
+ # 3. Encode input prompt
767
+ text_embeddings = self._encode_prompt(
768
+ prompt,
769
+ num_images_per_prompt,
770
+ do_classifier_free_guidance,
771
+ negative_prompt,
772
+ max_embeddings_multiples,
773
+ )
774
+ dtype = text_embeddings.dtype
775
+
776
+ # 4. Preprocess image and mask
777
+ if isinstance(image, PIL.Image.Image):
778
+ image = preprocess_image(image)
779
+ if image is not None:
780
+ image = image.astype(dtype)
781
+ if isinstance(mask_image, PIL.Image.Image):
782
+ mask_image = preprocess_mask(mask_image, self.vae_scale_factor)
783
+ if mask_image is not None:
784
+ mask = mask_image.astype(dtype)
785
+ mask = np.concatenate([mask] * batch_size * num_images_per_prompt)
786
+ else:
787
+ mask = None
788
+
789
+ # 5. set timesteps
790
+ self.scheduler.set_timesteps(num_inference_steps)
791
+ timestep_dtype = next(
792
+ (input.type for input in self.unet.model.get_inputs() if input.name == "timestep"), "tensor(float)"
793
+ )
794
+ timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
795
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, image is None)
796
+ latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
797
+
798
+ # 6. Prepare latent variables
799
+ latents, init_latents_orig, noise = self.prepare_latents(
800
+ image,
801
+ latent_timestep,
802
+ batch_size * num_images_per_prompt,
803
+ height,
804
+ width,
805
+ dtype,
806
+ generator,
807
+ latents,
808
+ )
809
+
810
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
811
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
812
+
813
+ # 8. Denoising loop
814
+ for i, t in enumerate(self.progress_bar(timesteps)):
815
+ # expand the latents if we are doing classifier free guidance
816
+ latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
817
+ latent_model_input = self.scheduler.scale_model_input(torch.from_numpy(latent_model_input), t)
818
+ latent_model_input = latent_model_input.numpy()
819
+
820
+ # predict the noise residual
821
+ noise_pred = self.unet(
822
+ sample=latent_model_input,
823
+ timestep=np.array([t], dtype=timestep_dtype),
824
+ encoder_hidden_states=text_embeddings,
825
+ )
826
+ noise_pred = noise_pred[0]
827
+
828
+ # perform guidance
829
+ if do_classifier_free_guidance:
830
+ noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
831
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
832
+
833
+ # compute the previous noisy sample x_t -> x_t-1
834
+ scheduler_output = self.scheduler.step(
835
+ torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs
836
+ )
837
+ latents = scheduler_output.prev_sample.numpy()
838
+
839
+ if mask is not None:
840
+ # masking
841
+ init_latents_proper = self.scheduler.add_noise(
842
+ torch.from_numpy(init_latents_orig),
843
+ torch.from_numpy(noise),
844
+ t,
845
+ ).numpy()
846
+ latents = (init_latents_proper * mask) + (latents * (1 - mask))
847
+
848
+ # call the callback, if provided
849
+ if i % callback_steps == 0:
850
+ if callback is not None:
851
+ callback(i, t, latents)
852
+ if is_cancelled_callback is not None and is_cancelled_callback():
853
+ return None
854
+
855
+ # 9. Post-processing
856
+ image = self.decode_latents(latents)
857
+
858
+ # 10. Run safety checker
859
+ image, has_nsfw_concept = self.run_safety_checker(image)
860
+
861
+ # 11. Convert to PIL
862
+ if output_type == "pil":
863
+ image = self.numpy_to_pil(image)
864
+
865
+ if not return_dict:
866
+ return image, has_nsfw_concept
867
+
868
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
869
+
870
+ def text2img(
871
+ self,
872
+ prompt: Union[str, List[str]],
873
+ negative_prompt: Optional[Union[str, List[str]]] = None,
874
+ height: int = 512,
875
+ width: int = 512,
876
+ num_inference_steps: int = 50,
877
+ guidance_scale: float = 7.5,
878
+ num_images_per_prompt: Optional[int] = 1,
879
+ eta: float = 0.0,
880
+ generator: Optional[torch.Generator] = None,
881
+ latents: Optional[np.ndarray] = None,
882
+ max_embeddings_multiples: Optional[int] = 3,
883
+ output_type: Optional[str] = "pil",
884
+ return_dict: bool = True,
885
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
886
+ callback_steps: Optional[int] = 1,
887
+ **kwargs,
888
+ ):
889
+ r"""
890
+ Function for text-to-image generation.
891
+ Args:
892
+ prompt (`str` or `List[str]`):
893
+ The prompt or prompts to guide the image generation.
894
+ negative_prompt (`str` or `List[str]`, *optional*):
895
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
896
+ if `guidance_scale` is less than `1`).
897
+ height (`int`, *optional*, defaults to 512):
898
+ The height in pixels of the generated image.
899
+ width (`int`, *optional*, defaults to 512):
900
+ The width in pixels of the generated image.
901
+ num_inference_steps (`int`, *optional*, defaults to 50):
902
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
903
+ expense of slower inference.
904
+ guidance_scale (`float`, *optional*, defaults to 7.5):
905
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
906
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
907
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
908
+ 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
909
+ usually at the expense of lower image quality.
910
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
911
+ The number of images to generate per prompt.
912
+ eta (`float`, *optional*, defaults to 0.0):
913
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
914
+ [`schedulers.DDIMScheduler`], will be ignored for others.
915
+ generator (`torch.Generator`, *optional*):
916
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
917
+ deterministic.
918
+ latents (`np.ndarray`, *optional*):
919
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
920
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
921
+ tensor will be generated by sampling using the supplied random `generator`.
922
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
923
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
924
+ output_type (`str`, *optional*, defaults to `"pil"`):
925
+ The output format of the generated image. Choose between
926
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
927
+ return_dict (`bool`, *optional*, defaults to `True`):
928
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
929
+ plain tuple.
930
+ callback (`Callable`, *optional*):
931
+ A function that will be called every `callback_steps` steps during inference. The function will be
932
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
933
+ callback_steps (`int`, *optional*, defaults to 1):
934
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
935
+ called at every step.
936
+ Returns:
937
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
938
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
939
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
940
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
941
+ (nsfw) content, according to the `safety_checker`.
942
+ """
943
+ return self.__call__(
944
+ prompt=prompt,
945
+ negative_prompt=negative_prompt,
946
+ height=height,
947
+ width=width,
948
+ num_inference_steps=num_inference_steps,
949
+ guidance_scale=guidance_scale,
950
+ num_images_per_prompt=num_images_per_prompt,
951
+ eta=eta,
952
+ generator=generator,
953
+ latents=latents,
954
+ max_embeddings_multiples=max_embeddings_multiples,
955
+ output_type=output_type,
956
+ return_dict=return_dict,
957
+ callback=callback,
958
+ callback_steps=callback_steps,
959
+ **kwargs,
960
+ )
961
+
962
+ def img2img(
963
+ self,
964
+ image: Union[np.ndarray, PIL.Image.Image],
965
+ prompt: Union[str, List[str]],
966
+ negative_prompt: Optional[Union[str, List[str]]] = None,
967
+ strength: float = 0.8,
968
+ num_inference_steps: Optional[int] = 50,
969
+ guidance_scale: Optional[float] = 7.5,
970
+ num_images_per_prompt: Optional[int] = 1,
971
+ eta: Optional[float] = 0.0,
972
+ generator: Optional[torch.Generator] = None,
973
+ max_embeddings_multiples: Optional[int] = 3,
974
+ output_type: Optional[str] = "pil",
975
+ return_dict: bool = True,
976
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
977
+ callback_steps: Optional[int] = 1,
978
+ **kwargs,
979
+ ):
980
+ r"""
981
+ Function for image-to-image generation.
982
+ Args:
983
+ image (`np.ndarray` or `PIL.Image.Image`):
984
+ `Image`, or ndarray representing an image batch, that will be used as the starting point for the
985
+ process.
986
+ prompt (`str` or `List[str]`):
987
+ The prompt or prompts to guide the image generation.
988
+ negative_prompt (`str` or `List[str]`, *optional*):
989
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
990
+ if `guidance_scale` is less than `1`).
991
+ strength (`float`, *optional*, defaults to 0.8):
992
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
993
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
994
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
995
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
996
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
997
+ num_inference_steps (`int`, *optional*, defaults to 50):
998
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
999
+ expense of slower inference. This parameter will be modulated by `strength`.
1000
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1001
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1002
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1003
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1004
+ 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
1005
+ usually at the expense of lower image quality.
1006
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1007
+ The number of images to generate per prompt.
1008
+ eta (`float`, *optional*, defaults to 0.0):
1009
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1010
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1011
+ generator (`torch.Generator`, *optional*):
1012
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1013
+ deterministic.
1014
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1015
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1016
+ output_type (`str`, *optional*, defaults to `"pil"`):
1017
+ The output format of the generated image. Choose between
1018
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1019
+ return_dict (`bool`, *optional*, defaults to `True`):
1020
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1021
+ plain tuple.
1022
+ callback (`Callable`, *optional*):
1023
+ A function that will be called every `callback_steps` steps during inference. The function will be
1024
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
1025
+ callback_steps (`int`, *optional*, defaults to 1):
1026
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1027
+ called at every step.
1028
+ Returns:
1029
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1030
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1031
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1032
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1033
+ (nsfw) content, according to the `safety_checker`.
1034
+ """
1035
+ return self.__call__(
1036
+ prompt=prompt,
1037
+ negative_prompt=negative_prompt,
1038
+ image=image,
1039
+ num_inference_steps=num_inference_steps,
1040
+ guidance_scale=guidance_scale,
1041
+ strength=strength,
1042
+ num_images_per_prompt=num_images_per_prompt,
1043
+ eta=eta,
1044
+ generator=generator,
1045
+ max_embeddings_multiples=max_embeddings_multiples,
1046
+ output_type=output_type,
1047
+ return_dict=return_dict,
1048
+ callback=callback,
1049
+ callback_steps=callback_steps,
1050
+ **kwargs,
1051
+ )
1052
+
1053
+ def inpaint(
1054
+ self,
1055
+ image: Union[np.ndarray, PIL.Image.Image],
1056
+ mask_image: Union[np.ndarray, PIL.Image.Image],
1057
+ prompt: Union[str, List[str]],
1058
+ negative_prompt: Optional[Union[str, List[str]]] = None,
1059
+ strength: float = 0.8,
1060
+ num_inference_steps: Optional[int] = 50,
1061
+ guidance_scale: Optional[float] = 7.5,
1062
+ num_images_per_prompt: Optional[int] = 1,
1063
+ eta: Optional[float] = 0.0,
1064
+ generator: Optional[torch.Generator] = None,
1065
+ max_embeddings_multiples: Optional[int] = 3,
1066
+ output_type: Optional[str] = "pil",
1067
+ return_dict: bool = True,
1068
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
1069
+ callback_steps: Optional[int] = 1,
1070
+ **kwargs,
1071
+ ):
1072
+ r"""
1073
+ Function for inpainting.
1074
+ Args:
1075
+ image (`np.ndarray` or `PIL.Image.Image`):
1076
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
1077
+ process. This is the image whose masked region will be inpainted.
1078
+ mask_image (`np.ndarray` or `PIL.Image.Image`):
1079
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
1080
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
1081
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
1082
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
1083
+ prompt (`str` or `List[str]`):
1084
+ The prompt or prompts to guide the image generation.
1085
+ negative_prompt (`str` or `List[str]`, *optional*):
1086
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1087
+ if `guidance_scale` is less than `1`).
1088
+ strength (`float`, *optional*, defaults to 0.8):
1089
+ Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
1090
+ is 1, the denoising process will be run on the masked area for the full number of iterations specified
1091
+ in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more
1092
+ noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
1093
+ num_inference_steps (`int`, *optional*, defaults to 50):
1094
+ The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
1095
+ the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
1096
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1097
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1098
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1099
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1100
+ 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
1101
+ usually at the expense of lower image quality.
1102
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1103
+ The number of images to generate per prompt.
1104
+ eta (`float`, *optional*, defaults to 0.0):
1105
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1106
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1107
+ generator (`torch.Generator`, *optional*):
1108
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1109
+ deterministic.
1110
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1111
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1112
+ output_type (`str`, *optional*, defaults to `"pil"`):
1113
+ The output format of the generated image. Choose between
1114
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1115
+ return_dict (`bool`, *optional*, defaults to `True`):
1116
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1117
+ plain tuple.
1118
+ callback (`Callable`, *optional*):
1119
+ A function that will be called every `callback_steps` steps during inference. The function will be
1120
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
1121
+ callback_steps (`int`, *optional*, defaults to 1):
1122
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1123
+ called at every step.
1124
+ Returns:
1125
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1126
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1127
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1128
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1129
+ (nsfw) content, according to the `safety_checker`.
1130
+ """
1131
+ return self.__call__(
1132
+ prompt=prompt,
1133
+ negative_prompt=negative_prompt,
1134
+ image=image,
1135
+ mask_image=mask_image,
1136
+ num_inference_steps=num_inference_steps,
1137
+ guidance_scale=guidance_scale,
1138
+ strength=strength,
1139
+ num_images_per_prompt=num_images_per_prompt,
1140
+ eta=eta,
1141
+ generator=generator,
1142
+ max_embeddings_multiples=max_embeddings_multiples,
1143
+ output_type=output_type,
1144
+ return_dict=return_dict,
1145
+ callback=callback,
1146
+ callback_steps=callback_steps,
1147
+ **kwargs,
1148
+ )
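For orientation, community pipelines such as this one are normally loaded by passing the script name to `custom_pipeline`. The snippet below is only a sketch: the model repository, the `onnx` revision, and the execution provider are illustrative assumptions, and `onnxruntime` must be installed for it to run.

```python
from diffusers import DiffusionPipeline

# Hypothetical setup: repo id, revision and provider are assumptions, not requirements of this file.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
    custom_pipeline="lpw_stable_diffusion_onnx",
)

# Parentheses raise attention; explicit (phrase:1.3) weights and [phrase] down-weighting also work.
result = pipe.text2img(
    "a (beautiful:1.2) mountain lake, ((masterpiece)), best quality",
    negative_prompt="lowres, (bad anatomy)",
    max_embeddings_multiples=3,
)
result.images[0].save("lpw_onnx_example.png")
```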
v0.11.1/multilingual_stable_diffusion.py ADDED
@@ -0,0 +1,436 @@
1
+ import inspect
2
+ from typing import Callable, List, Optional, Union
3
+
4
+ import torch
5
+
6
+ from diffusers.configuration_utils import FrozenDict
7
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
8
+ from diffusers.pipeline_utils import DiffusionPipeline
9
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
10
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
11
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
12
+ from diffusers.utils import deprecate, logging
13
+ from transformers import (
14
+ CLIPFeatureExtractor,
15
+ CLIPTextModel,
16
+ CLIPTokenizer,
17
+ MBart50TokenizerFast,
18
+ MBartForConditionalGeneration,
19
+ pipeline,
20
+ )
21
+
22
+
23
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
24
+
25
+
26
+ def detect_language(pipe, prompt, batch_size):
27
+ """helper function to detect language(s) of prompt"""
28
+
29
+ if batch_size == 1:
30
+ preds = pipe(prompt, top_k=1, truncation=True, max_length=128)
31
+ return preds[0]["label"]
32
+ else:
33
+ detected_languages = []
34
+ for p in prompt:
35
+ preds = pipe(p, top_k=1, truncation=True, max_length=128)
36
+ detected_languages.append(preds[0]["label"])
37
+
38
+ return detected_languages
39
+
40
+
41
+ def translate_prompt(prompt, translation_tokenizer, translation_model, device):
42
+ """helper function to translate prompt to English"""
43
+
44
+ encoded_prompt = translation_tokenizer(prompt, return_tensors="pt").to(device)
45
+ generated_tokens = translation_model.generate(**encoded_prompt, max_new_tokens=1000)
46
+ en_trans = translation_tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
47
+
48
+ return en_trans[0]
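+
+ # rough illustration of how the two helpers above are meant to be combined (hypothetical; assumes
+ # the detection pipeline returns ISO language codes such as "en"):
+ #   if detect_language(detection_pipeline, prompt, batch_size=1) != "en":
+ #       prompt = translate_prompt(prompt, translation_tokenizer, translation_model, "cpu")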
49
+
50
+
51
+ class MultilingualStableDiffusion(DiffusionPipeline):
52
+ r"""
53
+ Pipeline for text-to-image generation using Stable Diffusion in different languages.
54
+
55
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
56
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
57
+
58
+ Args:
59
+ detection_pipeline ([`pipeline`]):
60
+ Transformers pipeline to detect prompt's language.
61
+ translation_model ([`MBartForConditionalGeneration`]):
62
+ Model to translate prompt to English, if necessary. Please refer to the
63
+ [model card](https://huggingface.co/docs/transformers/model_doc/mbart) for details.
64
+ translation_tokenizer ([`MBart50TokenizerFast`]):
65
+ Tokenizer of the translation model.
66
+ vae ([`AutoencoderKL`]):
67
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
68
+ text_encoder ([`CLIPTextModel`]):
69
+ Frozen text-encoder. Stable Diffusion uses the text portion of
70
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
71
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
72
+ tokenizer (`CLIPTokenizer`):
73
+ Tokenizer of class
74
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
75
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
76
+ scheduler ([`SchedulerMixin`]):
77
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
78
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
79
+ safety_checker ([`StableDiffusionSafetyChecker`]):
80
+ Classification module that estimates whether generated images could be considered offensive or harmful.
81
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
82
+ feature_extractor ([`CLIPFeatureExtractor`]):
83
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
84
+ """
85
+
86
+ def __init__(
87
+ self,
88
+ detection_pipeline: pipeline,
89
+ translation_model: MBartForConditionalGeneration,
90
+ translation_tokenizer: MBart50TokenizerFast,
91
+ vae: AutoencoderKL,
92
+ text_encoder: CLIPTextModel,
93
+ tokenizer: CLIPTokenizer,
94
+ unet: UNet2DConditionModel,
95
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
96
+ safety_checker: StableDiffusionSafetyChecker,
97
+ feature_extractor: CLIPFeatureExtractor,
98
+ ):
99
+ super().__init__()
100
+
101
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
102
+ deprecation_message = (
103
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
104
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
105
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
106
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
107
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
108
+ " file"
109
+ )
110
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
111
+ new_config = dict(scheduler.config)
112
+ new_config["steps_offset"] = 1
113
+ scheduler._internal_dict = FrozenDict(new_config)
114
+
115
+ if safety_checker is None:
116
+ logger.warning(
117
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
118
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
119
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
120
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
121
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
122
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
123
+ )
124
+
125
+ self.register_modules(
126
+ detection_pipeline=detection_pipeline,
127
+ translation_model=translation_model,
128
+ translation_tokenizer=translation_tokenizer,
129
+ vae=vae,
130
+ text_encoder=text_encoder,
131
+ tokenizer=tokenizer,
132
+ unet=unet,
133
+ scheduler=scheduler,
134
+ safety_checker=safety_checker,
135
+ feature_extractor=feature_extractor,
136
+ )
137
+
138
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
139
+ r"""
140
+ Enable sliced attention computation.
141
+
142
+ When this option is enabled, the attention module will split the input tensor into slices to compute attention
143
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
144
+
145
+ Args:
146
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
147
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
148
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
149
+ `attention_head_dim` must be a multiple of `slice_size`.
150
+ """
151
+ if slice_size == "auto":
152
+ # half the attention head size is usually a good trade-off between
153
+ # speed and memory
154
+ slice_size = self.unet.config.attention_head_dim // 2
155
+ self.unet.set_attention_slice(slice_size)
156
+
157
+ def disable_attention_slicing(self):
158
+ r"""
159
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
160
+ back to computing attention in one step.
161
+ """
162
+ # set slice_size = `None` to disable `attention slicing`
163
+ self.enable_attention_slicing(None)
164
+
165
+ @torch.no_grad()
166
+ def __call__(
167
+ self,
168
+ prompt: Union[str, List[str]],
169
+ height: int = 512,
170
+ width: int = 512,
171
+ num_inference_steps: int = 50,
172
+ guidance_scale: float = 7.5,
173
+ negative_prompt: Optional[Union[str, List[str]]] = None,
174
+ num_images_per_prompt: Optional[int] = 1,
175
+ eta: float = 0.0,
176
+ generator: Optional[torch.Generator] = None,
177
+ latents: Optional[torch.FloatTensor] = None,
178
+ output_type: Optional[str] = "pil",
179
+ return_dict: bool = True,
180
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
181
+ callback_steps: Optional[int] = 1,
182
+ **kwargs,
183
+ ):
184
+ r"""
185
+ Function invoked when calling the pipeline for generation.
186
+
187
+ Args:
188
+ prompt (`str` or `List[str]`):
189
+ The prompt or prompts to guide the image generation. Can be in different languages.
190
+ height (`int`, *optional*, defaults to 512):
191
+ The height in pixels of the generated image.
192
+ width (`int`, *optional*, defaults to 512):
193
+ The width in pixels of the generated image.
194
+ num_inference_steps (`int`, *optional*, defaults to 50):
195
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
196
+ expense of slower inference.
197
+ guidance_scale (`float`, *optional*, defaults to 7.5):
198
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
199
+ `guidance_scale` is defined as `w` of equation 2 of the [Imagen
200
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
201
+ 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
202
+ usually at the expense of lower image quality.
203
+ negative_prompt (`str` or `List[str]`, *optional*):
204
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
205
+ if `guidance_scale` is less than `1`).
206
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
207
+ The number of images to generate per prompt.
208
+ eta (`float`, *optional*, defaults to 0.0):
209
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
210
+ [`schedulers.DDIMScheduler`], will be ignored for others.
211
+ generator (`torch.Generator`, *optional*):
212
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
213
+ deterministic.
214
+ latents (`torch.FloatTensor`, *optional*):
215
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
216
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
217
+ tensor will be generated by sampling using the supplied random `generator`.
218
+ output_type (`str`, *optional*, defaults to `"pil"`):
219
+ The output format of the generated image. Choose between
220
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
221
+ return_dict (`bool`, *optional*, defaults to `True`):
222
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
223
+ plain tuple.
224
+ callback (`Callable`, *optional*):
225
+ A function that will be called every `callback_steps` steps during inference. The function will be
226
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
227
+ callback_steps (`int`, *optional*, defaults to 1):
228
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
229
+ called at every step.
230
+
231
+ Returns:
232
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
233
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is `True`, otherwise a `tuple`.
234
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
235
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
236
+ (nsfw) content, according to the `safety_checker`.
237
+ """
238
+ if isinstance(prompt, str):
239
+ batch_size = 1
240
+ elif isinstance(prompt, list):
241
+ batch_size = len(prompt)
242
+ else:
243
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
244
+
245
+ if height % 8 != 0 or width % 8 != 0:
246
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
247
+
248
+ if (callback_steps is None) or (
249
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
250
+ ):
251
+ raise ValueError(
252
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
253
+ f" {type(callback_steps)}."
254
+ )
255
+
256
+ # detect language and translate if necessary
257
+ prompt_language = detect_language(self.detection_pipeline, prompt, batch_size)
258
+ if batch_size == 1 and prompt_language != "en":
259
+ prompt = translate_prompt(prompt, self.translation_tokenizer, self.translation_model, self.device)
260
+
261
+ if isinstance(prompt, list):
262
+ for index in range(batch_size):
263
+ if prompt_language[index] != "en":
264
+ p = translate_prompt(
265
+ prompt[index], self.translation_tokenizer, self.translation_model, self.device
266
+ )
267
+ prompt[index] = p
268
+
269
+ # get prompt text embeddings
270
+ text_inputs = self.tokenizer(
271
+ prompt,
272
+ padding="max_length",
273
+ max_length=self.tokenizer.model_max_length,
274
+ return_tensors="pt",
275
+ )
276
+ text_input_ids = text_inputs.input_ids
277
+
278
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
279
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
280
+ logger.warning(
281
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
282
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
283
+ )
284
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
285
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
286
+
287
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
288
+ bs_embed, seq_len, _ = text_embeddings.shape
289
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
290
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
291
+
292
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
293
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
294
+ # corresponds to doing no classifier free guidance.
295
+ do_classifier_free_guidance = guidance_scale > 1.0
296
+ # get unconditional embeddings for classifier free guidance
297
+ if do_classifier_free_guidance:
298
+ uncond_tokens: List[str]
299
+ if negative_prompt is None:
300
+ uncond_tokens = [""] * batch_size
301
+ elif type(prompt) is not type(negative_prompt):
302
+ raise TypeError(
303
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
304
+ f" {type(prompt)}."
305
+ )
306
+ elif isinstance(negative_prompt, str):
307
+ # detect language and translate it if necessary
308
+ negative_prompt_language = detect_language(self.detection_pipeline, negative_prompt, batch_size)
309
+ if negative_prompt_language != "en":
310
+ negative_prompt = translate_prompt(
311
+ negative_prompt, self.translation_tokenizer, self.translation_model, self.device
312
+ )
313
+ if isinstance(negative_prompt, str):
314
+ uncond_tokens = [negative_prompt]
315
+ elif batch_size != len(negative_prompt):
316
+ raise ValueError(
317
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
318
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
319
+ " the batch size of `prompt`."
320
+ )
321
+ else:
322
+ # detect language and translate it if necessary
323
+ if isinstance(negative_prompt, list):
324
+ negative_prompt_languages = detect_language(self.detection_pipeline, negative_prompt, batch_size)
325
+ for index in range(batch_size):
326
+ if negative_prompt_languages[index] != "en":
327
+ p = translate_prompt(
328
+ negative_prompt[index], self.translation_tokenizer, self.translation_model, self.device
329
+ )
330
+ negative_prompt[index] = p
331
+ uncond_tokens = negative_prompt
332
+
333
+ max_length = text_input_ids.shape[-1]
334
+ uncond_input = self.tokenizer(
335
+ uncond_tokens,
336
+ padding="max_length",
337
+ max_length=max_length,
338
+ truncation=True,
339
+ return_tensors="pt",
340
+ )
341
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
342
+
343
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
344
+ seq_len = uncond_embeddings.shape[1]
345
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
346
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
347
+
348
+ # For classifier free guidance, we need to do two forward passes.
349
+ # Here we concatenate the unconditional and text embeddings into a single batch
350
+ # to avoid doing two forward passes
351
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
352
+
353
+ # get the initial random noise unless the user supplied it
354
+
355
+ # Unlike in other pipelines, latents need to be generated in the target device
356
+ # for 1-to-1 results reproducibility with the CompVis implementation.
357
+ # However this currently doesn't work in `mps`.
358
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
359
+ latents_dtype = text_embeddings.dtype
360
+ if latents is None:
361
+ if self.device.type == "mps":
362
+ # randn does not work reproducibly on mps
363
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
364
+ self.device
365
+ )
366
+ else:
367
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
368
+ else:
369
+ if latents.shape != latents_shape:
370
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
371
+ latents = latents.to(self.device)
372
+
373
+ # set timesteps
374
+ self.scheduler.set_timesteps(num_inference_steps)
375
+
376
+ # Some schedulers like PNDM have timesteps as arrays
377
+ # It's more optimized to move all timesteps to correct device beforehand
378
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
379
+
380
+ # scale the initial noise by the standard deviation required by the scheduler
381
+ latents = latents * self.scheduler.init_noise_sigma
382
+
383
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
384
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
385
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
386
+ # and should be between [0, 1]
387
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
388
+ extra_step_kwargs = {}
389
+ if accepts_eta:
390
+ extra_step_kwargs["eta"] = eta
391
+
392
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
393
+ # expand the latents if we are doing classifier free guidance
394
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
395
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
396
+
397
+ # predict the noise residual
398
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
399
+
400
+ # perform guidance
401
+ if do_classifier_free_guidance:
402
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
403
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
404
+
405
+ # compute the previous noisy sample x_t -> x_t-1
406
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
407
+
408
+ # call the callback, if provided
409
+ if callback is not None and i % callback_steps == 0:
410
+ callback(i, t, latents)
411
+
412
+ latents = 1 / 0.18215 * latents
413
+ image = self.vae.decode(latents).sample
414
+
415
+ image = (image / 2 + 0.5).clamp(0, 1)
416
+
417
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
418
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
419
+
420
+ if self.safety_checker is not None:
421
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
422
+ self.device
423
+ )
424
+ image, has_nsfw_concept = self.safety_checker(
425
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
426
+ )
427
+ else:
428
+ has_nsfw_concept = None
429
+
430
+ if output_type == "pil":
431
+ image = self.numpy_to_pil(image)
432
+
433
+ if not return_dict:
434
+ return (image, has_nsfw_concept)
435
+
436
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
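For orientation, a minimal usage sketch for the pipeline above follows. It is illustrative only: the `custom_pipeline` id and the language-detection and translation checkpoints (`papluca/xlm-roberta-base-language-detection`, `facebook/mbart-large-50-many-to-one-mmt`) are assumptions, not something defined in this file.

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration, pipeline

from diffusers import DiffusionPipeline

# assumed helper models: language detection + many-to-English translation
detection_pipeline = pipeline("text-classification", model="papluca/xlm-roberta-base-language-detection")
translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="multilingual_stable_diffusion",  # assumed community pipeline id
    detection_pipeline=detection_pipeline,
    translation_model=translation_model,
    translation_tokenizer=translation_tokenizer,
)
pipe = pipe.to("cuda")

# a non-English prompt is detected and translated to English before text encoding
image = pipe("Una casa en la playa").images[0]
image.save("casa_en_la_playa.png")
```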
v0.11.1/one_step_unet.py ADDED
@@ -0,0 +1,24 @@
1
+ #!/usr/bin/env python3
2
+ import torch
3
+
4
+ from diffusers import DiffusionPipeline
5
+
6
+
7
+ class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
8
+ def __init__(self, unet, scheduler):
9
+ super().__init__()
10
+
11
+ self.register_modules(unet=unet, scheduler=scheduler)
12
+
13
+ def __call__(self):
14
+ image = torch.randn(
15
+ (1, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size),
16
+ )
17
+ timestep = 1
18
+
19
+ model_output = self.unet(image, timestep).sample
20
+ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample
21
+
22
+ result = scheduler_output - scheduler_output + torch.ones_like(scheduler_output)
23
+
24
+ return result
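As a quick sanity check, the one-step pipeline above can be loaded on top of any checkpoint that provides a `unet` and a `scheduler`; the checkpoint and the `custom_pipeline` id below are assumptions for illustration.

```python
from diffusers import DiffusionPipeline

# load the community pipeline on top of an unconditional DDPM checkpoint (assumed)
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")

# runs a single UNet forward pass and one scheduler step, then returns a tensor of ones
output = pipe()
print(output.shape)
```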
v0.11.1/sd_text2img_k_diffusion.py ADDED
@@ -0,0 +1,476 @@
1
+ # Copyright 2022 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import importlib
16
+ import warnings
17
+ from typing import Callable, List, Optional, Union
18
+
19
+ import torch
20
+
21
+ from diffusers import LMSDiscreteScheduler
22
+ from diffusers.pipeline_utils import DiffusionPipeline
23
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
24
+ from diffusers.utils import is_accelerate_available, logging
25
+ from k_diffusion.external import CompVisDenoiser, CompVisVDenoiser
26
+
27
+
28
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
29
+
30
+
31
+ class ModelWrapper:
32
+ def __init__(self, model, alphas_cumprod):
33
+ self.model = model
34
+ self.alphas_cumprod = alphas_cumprod
35
+
36
+ def apply_model(self, *args, **kwargs):
37
+ if len(args) == 3:
38
+ encoder_hidden_states = args[-1]
39
+ args = args[:2]
40
+ if kwargs.get("cond", None) is not None:
41
+ encoder_hidden_states = kwargs.pop("cond")
42
+ return self.model(*args, encoder_hidden_states=encoder_hidden_states, **kwargs).sample
43
+
44
+
45
+ class StableDiffusionPipeline(DiffusionPipeline):
46
+ r"""
47
+ Pipeline for text-to-image generation using Stable Diffusion.
48
+
49
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
50
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
51
+
52
+ Args:
53
+ vae ([`AutoencoderKL`]):
54
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
55
+ text_encoder ([`CLIPTextModel`]):
56
+ Frozen text-encoder. Stable Diffusion uses the text portion of
57
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
58
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
59
+ tokenizer (`CLIPTokenizer`):
60
+ Tokenizer of class
61
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
62
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
63
+ scheduler ([`SchedulerMixin`]):
64
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
65
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
66
+ safety_checker ([`StableDiffusionSafetyChecker`]):
67
+ Classification module that estimates whether generated images could be considered offensive or harmful.
68
+ Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
69
+ feature_extractor ([`CLIPFeatureExtractor`]):
70
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
71
+ """
72
+ _optional_components = ["safety_checker", "feature_extractor"]
73
+
74
+ def __init__(
75
+ self,
76
+ vae,
77
+ text_encoder,
78
+ tokenizer,
79
+ unet,
80
+ scheduler,
81
+ safety_checker,
82
+ feature_extractor,
83
+ ):
84
+ super().__init__()
85
+
86
+ if safety_checker is None:
87
+ logger.warning(
88
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
89
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
90
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
91
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
92
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
93
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
94
+ )
95
+
96
+ # get correct sigmas from LMS
97
+ scheduler = LMSDiscreteScheduler.from_config(scheduler.config)
98
+ self.register_modules(
99
+ vae=vae,
100
+ text_encoder=text_encoder,
101
+ tokenizer=tokenizer,
102
+ unet=unet,
103
+ scheduler=scheduler,
104
+ safety_checker=safety_checker,
105
+ feature_extractor=feature_extractor,
106
+ )
107
+
108
+ model = ModelWrapper(unet, scheduler.alphas_cumprod)
109
+ if scheduler.prediction_type == "v_prediction":
110
+ self.k_diffusion_model = CompVisVDenoiser(model)
111
+ else:
112
+ self.k_diffusion_model = CompVisDenoiser(model)
113
+
114
+ def set_sampler(self, scheduler_type: str):
115
+ warnings.warn("The `set_sampler` method is deprecated, please use `set_scheduler` instead.")
116
+ return self.set_scheduler(scheduler_type)
117
+
118
+ def set_scheduler(self, scheduler_type: str):
119
+ library = importlib.import_module("k_diffusion")
120
+ sampling = getattr(library, "sampling")
121
+ self.sampler = getattr(sampling, scheduler_type)
122
+
123
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
124
+ r"""
125
+ Enable sliced attention computation.
126
+
127
+ When this option is enabled, the attention module will split the input tensor into slices to compute attention
128
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
129
+
130
+ Args:
131
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
132
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
133
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
134
+ `attention_head_dim` must be a multiple of `slice_size`.
135
+ """
136
+ if slice_size == "auto":
137
+ # half the attention head size is usually a good trade-off between
138
+ # speed and memory
139
+ slice_size = self.unet.config.attention_head_dim // 2
140
+ self.unet.set_attention_slice(slice_size)
141
+
142
+ def disable_attention_slicing(self):
143
+ r"""
144
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
145
+ back to computing attention in one step.
146
+ """
147
+ # set slice_size = `None` to disable `attention slicing`
148
+ self.enable_attention_slicing(None)
149
+
150
+ def enable_sequential_cpu_offload(self, gpu_id=0):
151
+ r"""
152
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
153
+ text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
154
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
155
+ """
156
+ if is_accelerate_available():
157
+ from accelerate import cpu_offload
158
+ else:
159
+ raise ImportError("Please install accelerate via `pip install accelerate`")
160
+
161
+ device = torch.device(f"cuda:{gpu_id}")
162
+
163
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:
164
+ if cpu_offloaded_model is not None:
165
+ cpu_offload(cpu_offloaded_model, device)
166
+
167
+ @property
168
+ def _execution_device(self):
169
+ r"""
170
+ Returns the device on which the pipeline's models will be executed. After calling
171
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
172
+ hooks.
173
+ """
174
+ if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
175
+ return self.device
176
+ for module in self.unet.modules():
177
+ if (
178
+ hasattr(module, "_hf_hook")
179
+ and hasattr(module._hf_hook, "execution_device")
180
+ and module._hf_hook.execution_device is not None
181
+ ):
182
+ return torch.device(module._hf_hook.execution_device)
183
+ return self.device
184
+
185
+ def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
186
+ r"""
187
+ Encodes the prompt into text encoder hidden states.
188
+
189
+ Args:
190
+ prompt (`str` or `list(int)`):
191
+ prompt to be encoded
192
+ device: (`torch.device`):
193
+ torch device
194
+ num_images_per_prompt (`int`):
195
+ number of images that should be generated per prompt
196
+ do_classifier_free_guidance (`bool`):
197
+ whether to use classifier free guidance or not
198
+ negative_prompt (`str` or `List[str]`):
199
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
200
+ if `guidance_scale` is less than `1`).
201
+ """
202
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
203
+
204
+ text_inputs = self.tokenizer(
205
+ prompt,
206
+ padding="max_length",
207
+ max_length=self.tokenizer.model_max_length,
208
+ truncation=True,
209
+ return_tensors="pt",
210
+ )
211
+ text_input_ids = text_inputs.input_ids
212
+ untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids
213
+
214
+ if not torch.equal(text_input_ids, untruncated_ids):
215
+ removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
216
+ logger.warning(
217
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
218
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
219
+ )
220
+
221
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
222
+ attention_mask = text_inputs.attention_mask.to(device)
223
+ else:
224
+ attention_mask = None
225
+
226
+ text_embeddings = self.text_encoder(
227
+ text_input_ids.to(device),
228
+ attention_mask=attention_mask,
229
+ )
230
+ text_embeddings = text_embeddings[0]
231
+
232
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
233
+ bs_embed, seq_len, _ = text_embeddings.shape
234
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
235
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
236
+
237
+ # get unconditional embeddings for classifier free guidance
238
+ if do_classifier_free_guidance:
239
+ uncond_tokens: List[str]
240
+ if negative_prompt is None:
241
+ uncond_tokens = [""] * batch_size
242
+ elif type(prompt) is not type(negative_prompt):
243
+ raise TypeError(
244
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
245
+ f" {type(prompt)}."
246
+ )
247
+ elif isinstance(negative_prompt, str):
248
+ uncond_tokens = [negative_prompt]
249
+ elif batch_size != len(negative_prompt):
250
+ raise ValueError(
251
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
252
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
253
+ " the batch size of `prompt`."
254
+ )
255
+ else:
256
+ uncond_tokens = negative_prompt
257
+
258
+ max_length = text_input_ids.shape[-1]
259
+ uncond_input = self.tokenizer(
260
+ uncond_tokens,
261
+ padding="max_length",
262
+ max_length=max_length,
263
+ truncation=True,
264
+ return_tensors="pt",
265
+ )
266
+
267
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
268
+ attention_mask = uncond_input.attention_mask.to(device)
269
+ else:
270
+ attention_mask = None
271
+
272
+ uncond_embeddings = self.text_encoder(
273
+ uncond_input.input_ids.to(device),
274
+ attention_mask=attention_mask,
275
+ )
276
+ uncond_embeddings = uncond_embeddings[0]
277
+
278
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
279
+ seq_len = uncond_embeddings.shape[1]
280
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
281
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
282
+
283
+ # For classifier free guidance, we need to do two forward passes.
284
+ # Here we concatenate the unconditional and text embeddings into a single batch
285
+ # to avoid doing two forward passes
286
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
287
+
288
+ return text_embeddings
289
+
290
+ def run_safety_checker(self, image, device, dtype):
291
+ if self.safety_checker is not None:
292
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
293
+ image, has_nsfw_concept = self.safety_checker(
294
+ images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
295
+ )
296
+ else:
297
+ has_nsfw_concept = None
298
+ return image, has_nsfw_concept
299
+
300
+ def decode_latents(self, latents):
301
+ latents = 1 / 0.18215 * latents
302
+ image = self.vae.decode(latents).sample
303
+ image = (image / 2 + 0.5).clamp(0, 1)
304
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
305
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
306
+ return image
307
+
308
+ def check_inputs(self, prompt, height, width, callback_steps):
309
+ if not isinstance(prompt, str) and not isinstance(prompt, list):
310
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
311
+
312
+ if height % 8 != 0 or width % 8 != 0:
313
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
314
+
315
+ if (callback_steps is None) or (
316
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
317
+ ):
318
+ raise ValueError(
319
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
320
+ f" {type(callback_steps)}."
321
+ )
322
+
323
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
324
+ shape = (batch_size, num_channels_latents, height // 8, width // 8)
325
+ if latents is None:
326
+ if device.type == "mps":
327
+ # randn does not work reproducibly on mps
328
+ latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
329
+ else:
330
+ latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)
331
+ else:
332
+ if latents.shape != shape:
333
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
334
+ latents = latents.to(device)
335
+
336
+ # note: scaling by the scheduler's initial sigma happens in `__call__` via `latents * sigmas[0]`
337
+ return latents
338
+
339
+ @torch.no_grad()
340
+ def __call__(
341
+ self,
342
+ prompt: Union[str, List[str]],
343
+ height: int = 512,
344
+ width: int = 512,
345
+ num_inference_steps: int = 50,
346
+ guidance_scale: float = 7.5,
347
+ negative_prompt: Optional[Union[str, List[str]]] = None,
348
+ num_images_per_prompt: Optional[int] = 1,
349
+ eta: float = 0.0,
350
+ generator: Optional[torch.Generator] = None,
351
+ latents: Optional[torch.FloatTensor] = None,
352
+ output_type: Optional[str] = "pil",
353
+ return_dict: bool = True,
354
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
355
+ callback_steps: Optional[int] = 1,
356
+ **kwargs,
357
+ ):
358
+ r"""
359
+ Function invoked when calling the pipeline for generation.
360
+
361
+ Args:
362
+ prompt (`str` or `List[str]`):
363
+ The prompt or prompts to guide the image generation.
364
+ height (`int`, *optional*, defaults to 512):
365
+ The height in pixels of the generated image.
366
+ width (`int`, *optional*, defaults to 512):
367
+ The width in pixels of the generated image.
368
+ num_inference_steps (`int`, *optional*, defaults to 50):
369
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
370
+ expense of slower inference.
371
+ guidance_scale (`float`, *optional*, defaults to 7.5):
372
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
373
+ `guidance_scale` is defined as `w` of equation 2 of the [Imagen
374
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
375
+ 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
376
+ usually at the expense of lower image quality.
377
+ negative_prompt (`str` or `List[str]`, *optional*):
378
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
379
+ if `guidance_scale` is less than `1`).
380
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
381
+ The number of images to generate per prompt.
382
+ eta (`float`, *optional*, defaults to 0.0):
383
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
384
+ [`schedulers.DDIMScheduler`], will be ignored for others.
385
+ generator (`torch.Generator`, *optional*):
386
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
387
+ deterministic.
388
+ latents (`torch.FloatTensor`, *optional*):
389
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
390
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
391
+ tensor will be generated by sampling using the supplied random `generator`.
392
+ output_type (`str`, *optional*, defaults to `"pil"`):
393
+ The output format of the generated image. Choose between
394
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
395
+ return_dict (`bool`, *optional*, defaults to `True`):
396
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
397
+ plain tuple.
398
+ callback (`Callable`, *optional*):
399
+ A function that will be called every `callback_steps` steps during inference. The function will be
400
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
401
+ callback_steps (`int`, *optional*, defaults to 1):
402
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
403
+ called at every step.
404
+
405
+ Returns:
406
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
407
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is `True`, otherwise a `tuple`.
408
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
409
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
410
+ (nsfw) content, according to the `safety_checker`.
411
+ """
412
+
413
+ # 1. Check inputs. Raise error if not correct
414
+ self.check_inputs(prompt, height, width, callback_steps)
415
+
416
+ # 2. Define call parameters
417
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
418
+ device = self._execution_device
419
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
420
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
421
+ # corresponds to doing no classifier free guidance.
422
+ do_classifier_free_guidance = True
423
+ if guidance_scale <= 1.0:
424
+ raise ValueError("has to use guidance_scale")
425
+
426
+ # 3. Encode input prompt
427
+ text_embeddings = self._encode_prompt(
428
+ prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
429
+ )
430
+
431
+ # 4. Prepare timesteps
432
+ self.scheduler.set_timesteps(num_inference_steps, device=text_embeddings.device)
433
+ sigmas = self.scheduler.sigmas
434
+ sigmas = sigmas.to(text_embeddings.dtype)
435
+
436
+ # 5. Prepare latent variables
437
+ num_channels_latents = self.unet.in_channels
438
+ latents = self.prepare_latents(
439
+ batch_size * num_images_per_prompt,
440
+ num_channels_latents,
441
+ height,
442
+ width,
443
+ text_embeddings.dtype,
444
+ device,
445
+ generator,
446
+ latents,
447
+ )
448
+ latents = latents * sigmas[0]
449
+ self.k_diffusion_model.sigmas = self.k_diffusion_model.sigmas.to(latents.device)
450
+ self.k_diffusion_model.log_sigmas = self.k_diffusion_model.log_sigmas.to(latents.device)
451
+
452
+ def model_fn(x, t):
453
+ latent_model_input = torch.cat([x] * 2)
454
+
455
+ noise_pred = self.k_diffusion_model(latent_model_input, t, cond=text_embeddings)
456
+
457
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
458
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
459
+ return noise_pred
460
+
461
+ latents = self.sampler(model_fn, latents, sigmas)
462
+
463
+ # 8. Post-processing
464
+ image = self.decode_latents(latents)
465
+
466
+ # 9. Run safety checker
467
+ image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype)
468
+
469
+ # 10. Convert to PIL
470
+ if output_type == "pil":
471
+ image = self.numpy_to_pil(image)
472
+
473
+ if not return_dict:
474
+ return (image, has_nsfw_concept)
475
+
476
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
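A hedged usage sketch for the k-diffusion pipeline above: it assumes `k-diffusion` is installed (`pip install k-diffusion`) and that the file is exposed under the community pipeline id `sd_text2img_k_diffusion`; the checkpoint and sampler name are illustrative.

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="sd_text2img_k_diffusion",
)
pipe = pipe.to("cuda")

# pick any sampler exposed by k_diffusion.sampling, e.g. Heun's method
pipe.set_scheduler("sample_heun")

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("astronaut_heun.png")
```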
v0.11.1/seed_resize_stable_diffusion.py ADDED
@@ -0,0 +1,366 @@
1
+ """
2
+ modified based on diffusion library from Huggingface: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
3
+ """
4
+ import inspect
5
+ from typing import Callable, List, Optional, Union
6
+
7
+ import torch
8
+
9
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
10
+ from diffusers.pipeline_utils import DiffusionPipeline
11
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
12
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
13
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
14
+ from diffusers.utils import logging
15
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
16
+
17
+
18
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
19
+
20
+
21
+ class SeedResizeStableDiffusionPipeline(DiffusionPipeline):
22
+ r"""
23
+ Pipeline for text-to-image generation using Stable Diffusion.
24
+
25
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
26
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
27
+
28
+ Args:
29
+ vae ([`AutoencoderKL`]):
30
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
31
+ text_encoder ([`CLIPTextModel`]):
32
+ Frozen text-encoder. Stable Diffusion uses the text portion of
33
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
34
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
35
+ tokenizer (`CLIPTokenizer`):
36
+ Tokenizer of class
37
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
38
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
39
+ scheduler ([`SchedulerMixin`]):
40
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
41
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
42
+ safety_checker ([`StableDiffusionSafetyChecker`]):
43
+ Classification module that estimates whether generated images could be considered offensive or harmful.
44
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
45
+ feature_extractor ([`CLIPFeatureExtractor`]):
46
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
47
+ """
48
+
49
+ def __init__(
50
+ self,
51
+ vae: AutoencoderKL,
52
+ text_encoder: CLIPTextModel,
53
+ tokenizer: CLIPTokenizer,
54
+ unet: UNet2DConditionModel,
55
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
56
+ safety_checker: StableDiffusionSafetyChecker,
57
+ feature_extractor: CLIPFeatureExtractor,
58
+ ):
59
+ super().__init__()
60
+ self.register_modules(
61
+ vae=vae,
62
+ text_encoder=text_encoder,
63
+ tokenizer=tokenizer,
64
+ unet=unet,
65
+ scheduler=scheduler,
66
+ safety_checker=safety_checker,
67
+ feature_extractor=feature_extractor,
68
+ )
69
+
70
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
71
+ r"""
72
+ Enable sliced attention computation.
73
+
74
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
75
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
76
+
77
+ Args:
78
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
79
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
80
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
81
+ `attention_head_dim` must be a multiple of `slice_size`.
82
+ """
83
+ if slice_size == "auto":
84
+ # half the attention head size is usually a good trade-off between
85
+ # speed and memory
86
+ slice_size = self.unet.config.attention_head_dim // 2
87
+ self.unet.set_attention_slice(slice_size)
88
+
89
+ def disable_attention_slicing(self):
90
+ r"""
91
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
92
+ back to computing attention in one step.
93
+ """
94
+ # set slice_size = `None` to disable `attention slicing`
95
+ self.enable_attention_slicing(None)
96
+
97
+ @torch.no_grad()
98
+ def __call__(
99
+ self,
100
+ prompt: Union[str, List[str]],
101
+ height: int = 512,
102
+ width: int = 512,
103
+ num_inference_steps: int = 50,
104
+ guidance_scale: float = 7.5,
105
+ negative_prompt: Optional[Union[str, List[str]]] = None,
106
+ num_images_per_prompt: Optional[int] = 1,
107
+ eta: float = 0.0,
108
+ generator: Optional[torch.Generator] = None,
109
+ latents: Optional[torch.FloatTensor] = None,
110
+ output_type: Optional[str] = "pil",
111
+ return_dict: bool = True,
112
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
113
+ callback_steps: Optional[int] = 1,
114
+ text_embeddings: Optional[torch.FloatTensor] = None,
115
+ **kwargs,
116
+ ):
117
+ r"""
118
+ Function invoked when calling the pipeline for generation.
119
+
120
+ Args:
121
+ prompt (`str` or `List[str]`):
122
+ The prompt or prompts to guide the image generation.
123
+ height (`int`, *optional*, defaults to 512):
124
+ The height in pixels of the generated image.
125
+ width (`int`, *optional*, defaults to 512):
126
+ The width in pixels of the generated image.
127
+ num_inference_steps (`int`, *optional*, defaults to 50):
128
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
129
+ expense of slower inference.
130
+ guidance_scale (`float`, *optional*, defaults to 7.5):
131
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
132
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
133
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
134
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
135
+ usually at the expense of lower image quality.
136
+ negative_prompt (`str` or `List[str]`, *optional*):
137
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
138
+ if `guidance_scale` is less than `1`).
139
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
140
+ The number of images to generate per prompt.
141
+ eta (`float`, *optional*, defaults to 0.0):
142
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
143
+ [`schedulers.DDIMScheduler`], will be ignored for others.
144
+ generator (`torch.Generator`, *optional*):
145
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
146
+ deterministic.
147
+ latents (`torch.FloatTensor`, *optional*):
148
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
149
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
150
+ tensor will ge generated by sampling using the supplied random `generator`.
151
+ output_type (`str`, *optional*, defaults to `"pil"`):
152
+ The output format of the generate image. Choose between
153
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
154
+ return_dict (`bool`, *optional*, defaults to `True`):
155
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
156
+ plain tuple.
157
+ callback (`Callable`, *optional*):
158
+ A function that will be called every `callback_steps` steps during inference. The function will be
159
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
160
+ callback_steps (`int`, *optional*, defaults to 1):
161
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
162
+ called at every step.
163
+
164
+ Returns:
165
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
166
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is `True`, otherwise a `tuple`.
167
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
168
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
169
+ (nsfw) content, according to the `safety_checker`.
170
+ """
171
+
172
+ if isinstance(prompt, str):
173
+ batch_size = 1
174
+ elif isinstance(prompt, list):
175
+ batch_size = len(prompt)
176
+ else:
177
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
178
+
179
+ if height % 8 != 0 or width % 8 != 0:
180
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
181
+
182
+ if (callback_steps is None) or (
183
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
184
+ ):
185
+ raise ValueError(
186
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
187
+ f" {type(callback_steps)}."
188
+ )
189
+
190
+ # get prompt text embeddings
191
+ text_inputs = self.tokenizer(
192
+ prompt,
193
+ padding="max_length",
194
+ max_length=self.tokenizer.model_max_length,
195
+ return_tensors="pt",
196
+ )
197
+ text_input_ids = text_inputs.input_ids
198
+
199
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
200
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
201
+ logger.warning(
202
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
203
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
204
+ )
205
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
206
+
207
+ if text_embeddings is None:
208
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
209
+
210
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
211
+ bs_embed, seq_len, _ = text_embeddings.shape
212
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
213
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
214
+
215
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
216
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
217
+ # corresponds to doing no classifier free guidance.
218
+ do_classifier_free_guidance = guidance_scale > 1.0
219
+ # get unconditional embeddings for classifier free guidance
220
+ if do_classifier_free_guidance:
221
+ uncond_tokens: List[str]
222
+ if negative_prompt is None:
223
+ uncond_tokens = [""]
224
+ elif type(prompt) is not type(negative_prompt):
225
+ raise TypeError(
226
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
227
+ f" {type(prompt)}."
228
+ )
229
+ elif isinstance(negative_prompt, str):
230
+ uncond_tokens = [negative_prompt]
231
+ elif batch_size != len(negative_prompt):
232
+ raise ValueError(
233
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
234
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
235
+ " the batch size of `prompt`."
236
+ )
237
+ else:
238
+ uncond_tokens = negative_prompt
239
+
240
+ max_length = text_input_ids.shape[-1]
241
+ uncond_input = self.tokenizer(
242
+ uncond_tokens,
243
+ padding="max_length",
244
+ max_length=max_length,
245
+ truncation=True,
246
+ return_tensors="pt",
247
+ )
248
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
249
+
250
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
251
+ seq_len = uncond_embeddings.shape[1]
252
+ uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
253
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
254
+
255
+ # For classifier free guidance, we need to do two forward passes.
256
+ # Here we concatenate the unconditional and text embeddings into a single batch
257
+ # to avoid doing two forward passes
258
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
259
+
260
+ # get the initial random noise unless the user supplied it
261
+
262
+ # Unlike in other pipelines, latents need to be generated in the target device
263
+ # for 1-to-1 results reproducibility with the CompVis implementation.
264
+ # However this currently doesn't work in `mps`.
265
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
266
+ latents_shape_reference = (batch_size * num_images_per_prompt, self.unet.in_channels, 64, 64)
267
+ latents_dtype = text_embeddings.dtype
268
+ if latents is None:
269
+ if self.device.type == "mps":
270
+ # randn does not exist on mps
271
+ latents_reference = torch.randn(
272
+ latents_shape_reference, generator=generator, device="cpu", dtype=latents_dtype
273
+ ).to(self.device)
274
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
275
+ self.device
276
+ )
277
+ else:
278
+ latents_reference = torch.randn(
279
+ latents_shape_reference, generator=generator, device=self.device, dtype=latents_dtype
280
+ )
281
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
282
+ else:
283
+ if latents_reference.shape != latents_shape:
284
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
285
+ latents_reference = latents_reference.to(self.device)
286
+ latents = latents.to(self.device)
287
+
288
+ # This is the key part of the pipeline where we
289
+ # try to ensure that the generated images w/ the same seed
290
+ # but different sizes actually result in similar images
291
+ dx = (latents_shape[3] - latents_shape_reference[3]) // 2
292
+ dy = (latents_shape[2] - latents_shape_reference[2]) // 2
293
+ w = latents_shape_reference[3] if dx >= 0 else latents_shape_reference[3] + 2 * dx
294
+ h = latents_shape_reference[2] if dy >= 0 else latents_shape_reference[2] + 2 * dy
295
+ tx = 0 if dx < 0 else dx
296
+ ty = 0 if dy < 0 else dy
297
+ dx = max(-dx, 0)
298
+ dy = max(-dy, 0)
299
+ # import pdb
300
+ # pdb.set_trace()
301
+ latents[:, :, ty : ty + h, tx : tx + w] = latents_reference[:, :, dy : dy + h, dx : dx + w]
302
+
303
+ # set timesteps
304
+ self.scheduler.set_timesteps(num_inference_steps)
305
+
306
+ # Some schedulers like PNDM have timesteps as arrays
307
+ # It's more optimized to move all timesteps to correct device beforehand
308
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
309
+
310
+ # scale the initial noise by the standard deviation required by the scheduler
311
+ latents = latents * self.scheduler.init_noise_sigma
312
+
313
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
314
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
315
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
316
+ # and should be between [0, 1]
317
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
318
+ extra_step_kwargs = {}
319
+ if accepts_eta:
320
+ extra_step_kwargs["eta"] = eta
321
+
322
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
323
+ # expand the latents if we are doing classifier free guidance
324
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
325
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
326
+
327
+ # predict the noise residual
328
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
329
+
330
+ # perform guidance
331
+ if do_classifier_free_guidance:
332
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
333
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
334
+
335
+ # compute the previous noisy sample x_t -> x_t-1
336
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
337
+
338
+ # call the callback, if provided
339
+ if callback is not None and i % callback_steps == 0:
340
+ callback(i, t, latents)
341
+
342
+ latents = 1 / 0.18215 * latents
343
+ image = self.vae.decode(latents).sample
344
+
345
+ image = (image / 2 + 0.5).clamp(0, 1)
346
+
347
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16
348
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
349
+
350
+ if self.safety_checker is not None:
351
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
352
+ self.device
353
+ )
354
+ image, has_nsfw_concept = self.safety_checker(
355
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
356
+ )
357
+ else:
358
+ has_nsfw_concept = None
359
+
360
+ if output_type == "pil":
361
+ image = self.numpy_to_pil(image)
362
+
363
+ if not return_dict:
364
+ return (image, has_nsfw_concept)
365
+
366
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
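The sketch below illustrates the intended seed-resize behaviour, assuming the file is exposed as the community pipeline `seed_resize_stable_diffusion`; the checkpoint name and prompt are placeholders.

```python
import torch

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="seed_resize_stable_diffusion",
)
pipe = pipe.to("cuda")

prompt = "a photograph of a lighthouse at sunset"

# the same seed should yield a similar composition at a different resolution,
# because the 64x64 reference noise is re-used inside the larger latent grid
generator = torch.Generator(device="cuda").manual_seed(0)
image_512 = pipe(prompt, height=512, width=512, generator=generator).images[0]

generator = torch.Generator(device="cuda").manual_seed(0)
image_768 = pipe(prompt, height=512, width=768, generator=generator).images[0]
```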
v0.11.1/speech_to_image_diffusion.py ADDED
@@ -0,0 +1,261 @@
1
+ import inspect
2
+ from typing import Callable, List, Optional, Union
3
+
4
+ import torch
5
+
6
+ from diffusers import (
7
+ AutoencoderKL,
8
+ DDIMScheduler,
9
+ DiffusionPipeline,
10
+ LMSDiscreteScheduler,
11
+ PNDMScheduler,
12
+ UNet2DConditionModel,
13
+ )
14
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
15
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
16
+ from diffusers.utils import logging
17
+ from transformers import (
18
+ CLIPFeatureExtractor,
19
+ CLIPTextModel,
20
+ CLIPTokenizer,
21
+ WhisperForConditionalGeneration,
22
+ WhisperProcessor,
23
+ )
24
+
25
+
26
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
27
+
28
+
29
+ class SpeechToImagePipeline(DiffusionPipeline):
30
+ def __init__(
31
+ self,
32
+ speech_model: WhisperForConditionalGeneration,
33
+ speech_processor: WhisperProcessor,
34
+ vae: AutoencoderKL,
35
+ text_encoder: CLIPTextModel,
36
+ tokenizer: CLIPTokenizer,
37
+ unet: UNet2DConditionModel,
38
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
39
+ safety_checker: StableDiffusionSafetyChecker,
40
+ feature_extractor: CLIPFeatureExtractor,
41
+ ):
42
+ super().__init__()
43
+
44
+ if safety_checker is None:
45
+ logger.warning(
46
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
47
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
48
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
49
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
50
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
51
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
52
+ )
53
+
54
+ self.register_modules(
55
+ speech_model=speech_model,
56
+ speech_processor=speech_processor,
57
+ vae=vae,
58
+ text_encoder=text_encoder,
59
+ tokenizer=tokenizer,
60
+ unet=unet,
61
+ scheduler=scheduler,
62
+ feature_extractor=feature_extractor,
63
+ )
64
+
65
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
66
+ if slice_size == "auto":
67
+ slice_size = self.unet.config.attention_head_dim // 2
68
+ self.unet.set_attention_slice(slice_size)
69
+
70
+ def disable_attention_slicing(self):
71
+ self.enable_attention_slicing(None)
72
+
73
+ @torch.no_grad()
74
+ def __call__(
75
+ self,
76
+ audio,
77
+ sampling_rate=16_000,
78
+ height: int = 512,
79
+ width: int = 512,
80
+ num_inference_steps: int = 50,
81
+ guidance_scale: float = 7.5,
82
+ negative_prompt: Optional[Union[str, List[str]]] = None,
83
+ num_images_per_prompt: Optional[int] = 1,
84
+ eta: float = 0.0,
85
+ generator: Optional[torch.Generator] = None,
86
+ latents: Optional[torch.FloatTensor] = None,
87
+ output_type: Optional[str] = "pil",
88
+ return_dict: bool = True,
89
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
90
+ callback_steps: Optional[int] = 1,
91
+ **kwargs,
92
+ ):
93
+ inputs = self.speech_processor.feature_extractor(
94
+ audio, return_tensors="pt", sampling_rate=sampling_rate
95
+ ).input_features.to(self.device)
96
+ predicted_ids = self.speech_model.generate(inputs, max_length=480_000)
97
+
98
+ prompt = self.speech_processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True, normalize=True)[
99
+ 0
100
+ ]
101
+
102
+ if isinstance(prompt, str):
103
+ batch_size = 1
104
+ elif isinstance(prompt, list):
105
+ batch_size = len(prompt)
106
+ else:
107
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
108
+
109
+ if height % 8 != 0 or width % 8 != 0:
110
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
111
+
112
+ if (callback_steps is None) or (
113
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
114
+ ):
115
+ raise ValueError(
116
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
117
+ f" {type(callback_steps)}."
118
+ )
119
+
120
+ # get prompt text embeddings
121
+ text_inputs = self.tokenizer(
122
+ prompt,
123
+ padding="max_length",
124
+ max_length=self.tokenizer.model_max_length,
125
+ return_tensors="pt",
126
+ )
127
+ text_input_ids = text_inputs.input_ids
128
+
129
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
130
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
131
+ logger.warning(
132
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
133
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
134
+ )
135
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
136
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
137
+
138
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
139
+ bs_embed, seq_len, _ = text_embeddings.shape
140
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
141
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
142
+
143
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
144
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
145
+ # corresponds to doing no classifier free guidance.
146
+ do_classifier_free_guidance = guidance_scale > 1.0
147
+ # get unconditional embeddings for classifier free guidance
148
+ if do_classifier_free_guidance:
149
+ uncond_tokens: List[str]
150
+ if negative_prompt is None:
151
+ uncond_tokens = [""] * batch_size
152
+ elif type(prompt) is not type(negative_prompt):
153
+ raise TypeError(
154
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
155
+ f" {type(prompt)}."
156
+ )
157
+ elif isinstance(negative_prompt, str):
158
+ uncond_tokens = [negative_prompt]
159
+ elif batch_size != len(negative_prompt):
160
+ raise ValueError(
161
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
162
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
163
+ " the batch size of `prompt`."
164
+ )
165
+ else:
166
+ uncond_tokens = negative_prompt
167
+
168
+ max_length = text_input_ids.shape[-1]
169
+ uncond_input = self.tokenizer(
170
+ uncond_tokens,
171
+ padding="max_length",
172
+ max_length=max_length,
173
+ truncation=True,
174
+ return_tensors="pt",
175
+ )
176
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
177
+
178
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
179
+ seq_len = uncond_embeddings.shape[1]
180
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
181
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
182
+
183
+ # For classifier free guidance, we need to do two forward passes.
184
+ # Here we concatenate the unconditional and text embeddings into a single batch
185
+ # to avoid doing two forward passes
186
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
187
+
188
+ # get the initial random noise unless the user supplied it
189
+
190
+ # Unlike in other pipelines, latents need to be generated in the target device
191
+ # for 1-to-1 results reproducibility with the CompVis implementation.
192
+ # However this currently doesn't work in `mps`.
193
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
194
+ latents_dtype = text_embeddings.dtype
195
+ if latents is None:
196
+ if self.device.type == "mps":
197
+ # randn does not exist on mps
198
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
199
+ self.device
200
+ )
201
+ else:
202
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
203
+ else:
204
+ if latents.shape != latents_shape:
205
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
206
+ latents = latents.to(self.device)
207
+
208
+ # set timesteps
209
+ self.scheduler.set_timesteps(num_inference_steps)
210
+
211
+ # Some schedulers like PNDM have timesteps as arrays
212
+ # It's more optimized to move all timesteps to correct device beforehand
213
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
214
+
215
+ # scale the initial noise by the standard deviation required by the scheduler
216
+ latents = latents * self.scheduler.init_noise_sigma
217
+
218
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
219
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
220
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
221
+ # and should be between [0, 1]
222
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
223
+ extra_step_kwargs = {}
224
+ if accepts_eta:
225
+ extra_step_kwargs["eta"] = eta
226
+
227
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
228
+ # expand the latents if we are doing classifier free guidance
229
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
230
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
231
+
232
+ # predict the noise residual
233
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
234
+
235
+ # perform guidance
236
+ if do_classifier_free_guidance:
237
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
238
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
239
+
240
+ # compute the previous noisy sample x_t -> x_t-1
241
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
242
+
243
+ # call the callback, if provided
244
+ if callback is not None and i % callback_steps == 0:
245
+ callback(i, t, latents)
246
+
247
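+ # scale and decode the image latents with the VAE (0.18215 is the latent scaling factor of Stable Diffusion's VAE)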
+ latents = 1 / 0.18215 * latents
248
+ image = self.vae.decode(latents).sample
249
+
250
+ image = (image / 2 + 0.5).clamp(0, 1)
251
+
252
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
253
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
254
+
255
+ if output_type == "pil":
256
+ image = self.numpy_to_pil(image)
257
+
258
+ if not return_dict:
259
+ return image
260
+
261
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
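+
+
+ # Minimal usage sketch, assuming this file is loaded as a community pipeline through the
+ # `custom_pipeline` argument; the Whisper checkpoint and the audio sample below are
+ # illustrative choices, not requirements of the pipeline.
+ if __name__ == "__main__":
+     from datasets import load_dataset
+
+     device = "cuda" if torch.cuda.is_available() else "cpu"
+     audio_sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[3]["audio"]
+
+     pipe = DiffusionPipeline.from_pretrained(
+         "CompVis/stable-diffusion-v1-4",
+         custom_pipeline="speech_to_image_diffusion",
+         speech_model=WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device),
+         speech_processor=WhisperProcessor.from_pretrained("openai/whisper-small"),
+     ).to(device)
+
+     output = pipe(audio_sample["array"], sampling_rate=audio_sample["sampling_rate"])
+     output.images[0].save("speech_to_image.png")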
v0.11.1/stable_diffusion_comparison.py ADDED
@@ -0,0 +1,405 @@
1
+ from typing import Any, Callable, Dict, List, Optional, Union
2
+
3
+ import torch
4
+
5
+ from diffusers import (
6
+ AutoencoderKL,
7
+ DDIMScheduler,
8
+ DiffusionPipeline,
9
+ LMSDiscreteScheduler,
10
+ PNDMScheduler,
11
+ StableDiffusionPipeline,
12
+ UNet2DConditionModel,
13
+ )
14
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
15
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
16
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
17
+
18
+
19
+ pipe1_model_id = "CompVis/stable-diffusion-v1-1"
20
+ pipe2_model_id = "CompVis/stable-diffusion-v1-2"
21
+ pipe3_model_id = "CompVis/stable-diffusion-v1-3"
22
+ pipe4_model_id = "CompVis/stable-diffusion-v1-4"
23
+
24
+
25
+ class StableDiffusionComparisonPipeline(DiffusionPipeline):
26
+ r"""
27
+ Pipeline for parallel comparison of Stable Diffusion v1.1-v1.4
28
+ This pipeline inherits from DiffusionPipeline and depends on the use of an Auth Token for
29
+ downloading pre-trained checkpoints from Hugging Face Hub.
30
+ If using Hugging Face Hub, pass the Model ID for Stable Diffusion v1.4 as the previous 3 checkpoints will be loaded
31
+ automatically.
32
+ Args:
33
+ vae ([`AutoencoderKL`]):
34
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
35
+ text_encoder ([`CLIPTextModel`]):
36
+ Frozen text-encoder. Stable Diffusion uses the text portion of
37
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
38
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
39
+ tokenizer (`CLIPTokenizer`):
40
+ Tokenizer of class
41
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
42
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
43
+ scheduler ([`SchedulerMixin`]):
44
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
45
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
46
+ safety_checker ([`StableDiffusionMegaSafetyChecker`]):
47
+ Classification module that estimates whether generated images could be considered offensive or harmful.
48
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
49
+ feature_extractor ([`CLIPFeatureExtractor`]):
50
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
51
+ """
52
+
53
+ def __init__(
54
+ self,
55
+ vae: AutoencoderKL,
56
+ text_encoder: CLIPTextModel,
57
+ tokenizer: CLIPTokenizer,
58
+ unet: UNet2DConditionModel,
59
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
60
+ safety_checker: StableDiffusionSafetyChecker,
61
+ feature_extractor: CLIPFeatureExtractor,
62
+ requires_safety_checker: bool = True,
63
+ ):
64
+ super().__init__()
65
+
66
+ self.pipe1 = StableDiffusionPipeline.from_pretrained(pipe1_model_id)
67
+ self.pipe2 = StableDiffusionPipeline.from_pretrained(pipe2_model_id)
68
+ self.pipe3 = StableDiffusionPipeline.from_pretrained(pipe3_model_id)
69
+ self.pipe4 = StableDiffusionPipeline(
70
+ vae=vae,
71
+ text_encoder=text_encoder,
72
+ tokenizer=tokenizer,
73
+ unet=unet,
74
+ scheduler=scheduler,
75
+ safety_checker=safety_checker,
76
+ feature_extractor=feature_extractor,
77
+ requires_safety_checker=requires_safety_checker,
78
+ )
79
+
80
+ self.register_modules(pipeline1=self.pipe1, pipeline2=self.pipe2, pipeline3=self.pipe3, pipeline4=self.pipe4)
81
+
82
+ @property
83
+ def layers(self) -> Dict[str, Any]:
84
+ return {k: getattr(self, k) for k in self.config.keys() if not k.startswith("_")}
85
+
86
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
87
+ r"""
88
+ Enable sliced attention computation.
89
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
90
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
91
+ Args:
92
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
93
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
94
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
95
+ `attention_head_dim` must be a multiple of `slice_size`.
96
+ """
97
+ if slice_size == "auto":
98
+ # half the attention head size is usually a good trade-off between
99
+ # speed and memory
100
+ slice_size = self.unet.config.attention_head_dim // 2
101
+ self.unet.set_attention_slice(slice_size)
102
+
103
+ def disable_attention_slicing(self):
104
+ r"""
105
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
106
+ back to computing attention in one step.
107
+ """
108
+ # set slice_size = `None` to disable `attention slicing`
109
+ self.enable_attention_slicing(None)
110
+
111
+ @torch.no_grad()
112
+ def text2img_sd1_1(
113
+ self,
114
+ prompt: Union[str, List[str]],
115
+ height: int = 512,
116
+ width: int = 512,
117
+ num_inference_steps: int = 50,
118
+ guidance_scale: float = 7.5,
119
+ negative_prompt: Optional[Union[str, List[str]]] = None,
120
+ num_images_per_prompt: Optional[int] = 1,
121
+ eta: float = 0.0,
122
+ generator: Optional[torch.Generator] = None,
123
+ latents: Optional[torch.FloatTensor] = None,
124
+ output_type: Optional[str] = "pil",
125
+ return_dict: bool = True,
126
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
127
+ callback_steps: Optional[int] = 1,
128
+ **kwargs,
129
+ ):
130
+ return self.pipe1(
131
+ prompt=prompt,
132
+ height=height,
133
+ width=width,
134
+ num_inference_steps=num_inference_steps,
135
+ guidance_scale=guidance_scale,
136
+ negative_prompt=negative_prompt,
137
+ num_images_per_prompt=num_images_per_prompt,
138
+ eta=eta,
139
+ generator=generator,
140
+ latents=latents,
141
+ output_type=output_type,
142
+ return_dict=return_dict,
143
+ callback=callback,
144
+ callback_steps=callback_steps,
145
+ **kwargs,
146
+ )
147
+
148
+ @torch.no_grad()
149
+ def text2img_sd1_2(
150
+ self,
151
+ prompt: Union[str, List[str]],
152
+ height: int = 512,
153
+ width: int = 512,
154
+ num_inference_steps: int = 50,
155
+ guidance_scale: float = 7.5,
156
+ negative_prompt: Optional[Union[str, List[str]]] = None,
157
+ num_images_per_prompt: Optional[int] = 1,
158
+ eta: float = 0.0,
159
+ generator: Optional[torch.Generator] = None,
160
+ latents: Optional[torch.FloatTensor] = None,
161
+ output_type: Optional[str] = "pil",
162
+ return_dict: bool = True,
163
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
164
+ callback_steps: Optional[int] = 1,
165
+ **kwargs,
166
+ ):
167
+ return self.pipe2(
168
+ prompt=prompt,
169
+ height=height,
170
+ width=width,
171
+ num_inference_steps=num_inference_steps,
172
+ guidance_scale=guidance_scale,
173
+ negative_prompt=negative_prompt,
174
+ num_images_per_prompt=num_images_per_prompt,
175
+ eta=eta,
176
+ generator=generator,
177
+ latents=latents,
178
+ output_type=output_type,
179
+ return_dict=return_dict,
180
+ callback=callback,
181
+ callback_steps=callback_steps,
182
+ **kwargs,
183
+ )
184
+
185
+ @torch.no_grad()
186
+ def text2img_sd1_3(
187
+ self,
188
+ prompt: Union[str, List[str]],
189
+ height: int = 512,
190
+ width: int = 512,
191
+ num_inference_steps: int = 50,
192
+ guidance_scale: float = 7.5,
193
+ negative_prompt: Optional[Union[str, List[str]]] = None,
194
+ num_images_per_prompt: Optional[int] = 1,
195
+ eta: float = 0.0,
196
+ generator: Optional[torch.Generator] = None,
197
+ latents: Optional[torch.FloatTensor] = None,
198
+ output_type: Optional[str] = "pil",
199
+ return_dict: bool = True,
200
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
201
+ callback_steps: Optional[int] = 1,
202
+ **kwargs,
203
+ ):
204
+ return self.pipe3(
205
+ prompt=prompt,
206
+ height=height,
207
+ width=width,
208
+ num_inference_steps=num_inference_steps,
209
+ guidance_scale=guidance_scale,
210
+ negative_prompt=negative_prompt,
211
+ num_images_per_prompt=num_images_per_prompt,
212
+ eta=eta,
213
+ generator=generator,
214
+ latents=latents,
215
+ output_type=output_type,
216
+ return_dict=return_dict,
217
+ callback=callback,
218
+ callback_steps=callback_steps,
219
+ **kwargs,
220
+ )
221
+
222
+ @torch.no_grad()
223
+ def text2img_sd1_4(
224
+ self,
225
+ prompt: Union[str, List[str]],
226
+ height: int = 512,
227
+ width: int = 512,
228
+ num_inference_steps: int = 50,
229
+ guidance_scale: float = 7.5,
230
+ negative_prompt: Optional[Union[str, List[str]]] = None,
231
+ num_images_per_prompt: Optional[int] = 1,
232
+ eta: float = 0.0,
233
+ generator: Optional[torch.Generator] = None,
234
+ latents: Optional[torch.FloatTensor] = None,
235
+ output_type: Optional[str] = "pil",
236
+ return_dict: bool = True,
237
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
238
+ callback_steps: Optional[int] = 1,
239
+ **kwargs,
240
+ ):
241
+ return self.pipe4(
242
+ prompt=prompt,
243
+ height=height,
244
+ width=width,
245
+ num_inference_steps=num_inference_steps,
246
+ guidance_scale=guidance_scale,
247
+ negative_prompt=negative_prompt,
248
+ num_images_per_prompt=num_images_per_prompt,
249
+ eta=eta,
250
+ generator=generator,
251
+ latents=latents,
252
+ output_type=output_type,
253
+ return_dict=return_dict,
254
+ callback=callback,
255
+ callback_steps=callback_steps,
256
+ **kwargs,
257
+ )
258
+
259
+ @torch.no_grad()
260
+ def __call__(
261
+ self,
262
+ prompt: Union[str, List[str]],
263
+ height: int = 512,
264
+ width: int = 512,
265
+ num_inference_steps: int = 50,
266
+ guidance_scale: float = 7.5,
267
+ negative_prompt: Optional[Union[str, List[str]]] = None,
268
+ num_images_per_prompt: Optional[int] = 1,
269
+ eta: float = 0.0,
270
+ generator: Optional[torch.Generator] = None,
271
+ latents: Optional[torch.FloatTensor] = None,
272
+ output_type: Optional[str] = "pil",
273
+ return_dict: bool = True,
274
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
275
+ callback_steps: Optional[int] = 1,
276
+ **kwargs,
277
+ ):
278
+ r"""
279
+ Function invoked when calling the pipeline for generation. This function generates four results by running
280
+ the four Stable Diffusion v1.1-v1.4 pipelines sequentially on the same inputs.
281
+ Args:
282
+ prompt (`str` or `List[str]`):
283
+ The prompt or prompts to guide the image generation.
284
+ height (`int`, optional, defaults to 512):
285
+ The height in pixels of the generated image.
286
+ width (`int`, optional, defaults to 512):
287
+ The width in pixels of the generated image.
288
+ num_inference_steps (`int`, optional, defaults to 50):
289
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
290
+ expense of slower inference.
291
+ guidance_scale (`float`, optional, defaults to 7.5):
292
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
293
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
294
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
295
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
296
+ usually at the expense of lower image quality.
297
+ eta (`float`, optional, defaults to 0.0):
298
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
299
+ [`schedulers.DDIMScheduler`], will be ignored for others.
300
+ generator (`torch.Generator`, optional):
301
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
302
+ deterministic.
303
+ latents (`torch.FloatTensor`, optional):
304
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
305
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
306
+ tensor will ge generated by sampling using the supplied random `generator`.
307
+ output_type (`str`, optional, defaults to `"pil"`):
308
+ The output format of the generate image. Choose between
309
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
310
+ return_dict (`bool`, optional, defaults to `True`):
311
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
312
+ plain tuple.
313
+ Returns:
314
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
315
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
316
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
317
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
318
+ (nsfw) content, according to the `safety_checker`.
319
+ """
320
+
321
+ device = "cuda" if torch.cuda.is_available() else "cpu"
322
+ self.to(device)
323
+
324
+ # Checks if the height and width are divisible by 8 or not
325
+ if height % 8 != 0 or width % 8 != 0:
326
+ raise ValueError(f"`height` and `width` must be divisible by 8 but are {height} and {width}.")
327
+
328
+ # Get first result from Stable Diffusion Checkpoint v1.1
329
+ res1 = self.text2img_sd1_1(
330
+ prompt=prompt,
331
+ height=height,
332
+ width=width,
333
+ num_inference_steps=num_inference_steps,
334
+ guidance_scale=guidance_scale,
335
+ negative_prompt=negative_prompt,
336
+ num_images_per_prompt=num_images_per_prompt,
337
+ eta=eta,
338
+ generator=generator,
339
+ latents=latents,
340
+ output_type=output_type,
341
+ return_dict=return_dict,
342
+ callback=callback,
343
+ callback_steps=callback_steps,
344
+ **kwargs,
345
+ )
346
+
347
+ # Get first result from Stable Diffusion Checkpoint v1.2
348
+ res2 = self.text2img_sd1_2(
349
+ prompt=prompt,
350
+ height=height,
351
+ width=width,
352
+ num_inference_steps=num_inference_steps,
353
+ guidance_scale=guidance_scale,
354
+ negative_prompt=negative_prompt,
355
+ num_images_per_prompt=num_images_per_prompt,
356
+ eta=eta,
357
+ generator=generator,
358
+ latents=latents,
359
+ output_type=output_type,
360
+ return_dict=return_dict,
361
+ callback=callback,
362
+ callback_steps=callback_steps,
363
+ **kwargs,
364
+ )
365
+
366
+ # Get first result from Stable Diffusion Checkpoint v1.3
367
+ res3 = self.text2img_sd1_3(
368
+ prompt=prompt,
369
+ height=height,
370
+ width=width,
371
+ num_inference_steps=num_inference_steps,
372
+ guidance_scale=guidance_scale,
373
+ negative_prompt=negative_prompt,
374
+ num_images_per_prompt=num_images_per_prompt,
375
+ eta=eta,
376
+ generator=generator,
377
+ latents=latents,
378
+ output_type=output_type,
379
+ return_dict=return_dict,
380
+ callback=callback,
381
+ callback_steps=callback_steps,
382
+ **kwargs,
383
+ )
384
+
385
+ # Get first result from Stable Diffusion Checkpoint v1.4
386
+ res4 = self.text2img_sd1_4(
387
+ prompt=prompt,
388
+ height=height,
389
+ width=width,
390
+ num_inference_steps=num_inference_steps,
391
+ guidance_scale=guidance_scale,
392
+ negative_prompt=negative_prompt,
393
+ num_images_per_prompt=num_images_per_prompt,
394
+ eta=eta,
395
+ generator=generator,
396
+ latents=latents,
397
+ output_type=output_type,
398
+ return_dict=return_dict,
399
+ callback=callback,
400
+ callback_steps=callback_steps,
401
+ **kwargs,
402
+ )
403
+
404
+ # Get all result images into a single list and pass it via StableDiffusionPipelineOutput for final result
405
+ return StableDiffusionPipelineOutput([res1[0], res2[0], res3[0], res4[0]])
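+
+
+ # Minimal usage sketch, assuming this file is loaded as a community pipeline through the
+ # `custom_pipeline` argument; access to the four CompVis checkpoints (and a prior
+ # `huggingface-cli login`, if required) is assumed.
+ if __name__ == "__main__":
+     pipe = DiffusionPipeline.from_pretrained(
+         "CompVis/stable-diffusion-v1-4",
+         custom_pipeline="stable_diffusion_comparison",
+     )
+     output = pipe("an astronaut riding a horse on mars")
+     # output.images collects the results from the v1.1, v1.2, v1.3 and v1.4 checkpoints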
v0.11.1/stable_diffusion_mega.py ADDED
@@ -0,0 +1,227 @@
1
+ from typing import Any, Callable, Dict, List, Optional, Union
2
+
3
+ import torch
4
+
5
+ import PIL.Image
6
+ from diffusers import (
7
+ AutoencoderKL,
8
+ DDIMScheduler,
9
+ DiffusionPipeline,
10
+ LMSDiscreteScheduler,
11
+ PNDMScheduler,
12
+ StableDiffusionImg2ImgPipeline,
13
+ StableDiffusionInpaintPipelineLegacy,
14
+ StableDiffusionPipeline,
15
+ UNet2DConditionModel,
16
+ )
17
+ from diffusers.configuration_utils import FrozenDict
18
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
19
+ from diffusers.utils import deprecate, logging
20
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
21
+
22
+
23
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
24
+
25
+
26
+ class StableDiffusionMegaPipeline(DiffusionPipeline):
27
+ r"""
28
+ Pipeline for text-to-image generation using Stable Diffusion.
29
+
30
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
31
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
32
+
33
+ Args:
34
+ vae ([`AutoencoderKL`]):
35
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
36
+ text_encoder ([`CLIPTextModel`]):
37
+ Frozen text-encoder. Stable Diffusion uses the text portion of
38
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
39
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
40
+ tokenizer (`CLIPTokenizer`):
41
+ Tokenizer of class
42
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
43
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
44
+ scheduler ([`SchedulerMixin`]):
45
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
46
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
47
+ safety_checker ([`StableDiffusionMegaSafetyChecker`]):
48
+ Classification module that estimates whether generated images could be considered offensive or harmful.
49
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
50
+ feature_extractor ([`CLIPFeatureExtractor`]):
51
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
52
+ """
53
+ _optional_components = ["safety_checker", "feature_extractor"]
54
+
55
+ def __init__(
56
+ self,
57
+ vae: AutoencoderKL,
58
+ text_encoder: CLIPTextModel,
59
+ tokenizer: CLIPTokenizer,
60
+ unet: UNet2DConditionModel,
61
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
62
+ safety_checker: StableDiffusionSafetyChecker,
63
+ feature_extractor: CLIPFeatureExtractor,
64
+ requires_safety_checker: bool = True,
65
+ ):
66
+ super().__init__()
67
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
68
+ deprecation_message = (
69
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
70
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
71
+ "to update the config accordingly as leaving `steps_offset` might lead to incorrect results"
72
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
73
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
74
+ " file"
75
+ )
76
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
77
+ new_config = dict(scheduler.config)
78
+ new_config["steps_offset"] = 1
79
+ scheduler._internal_dict = FrozenDict(new_config)
80
+
81
+ self.register_modules(
82
+ vae=vae,
83
+ text_encoder=text_encoder,
84
+ tokenizer=tokenizer,
85
+ unet=unet,
86
+ scheduler=scheduler,
87
+ safety_checker=safety_checker,
88
+ feature_extractor=feature_extractor,
89
+ )
90
+ self.register_to_config(requires_safety_checker=requires_safety_checker)
91
+
92
+ @property
93
+ def components(self) -> Dict[str, Any]:
94
+ return {k: getattr(self, k) for k in self.config.keys() if not k.startswith("_")}
95
+
96
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
97
+ r"""
98
+ Enable sliced attention computation.
99
+
100
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
101
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
102
+
103
+ Args:
104
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
105
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
106
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
107
+ `attention_head_dim` must be a multiple of `slice_size`.
108
+ """
109
+ if slice_size == "auto":
110
+ # half the attention head size is usually a good trade-off between
111
+ # speed and memory
112
+ slice_size = self.unet.config.attention_head_dim // 2
113
+ self.unet.set_attention_slice(slice_size)
114
+
115
+ def disable_attention_slicing(self):
116
+ r"""
117
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
118
+ back to computing attention in one step.
119
+ """
120
+ # set slice_size = `None` to disable `attention slicing`
121
+ self.enable_attention_slicing(None)
122
+
123
+ @torch.no_grad()
124
+ def inpaint(
125
+ self,
126
+ prompt: Union[str, List[str]],
127
+ image: Union[torch.FloatTensor, PIL.Image.Image],
128
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image],
129
+ strength: float = 0.8,
130
+ num_inference_steps: Optional[int] = 50,
131
+ guidance_scale: Optional[float] = 7.5,
132
+ negative_prompt: Optional[Union[str, List[str]]] = None,
133
+ num_images_per_prompt: Optional[int] = 1,
134
+ eta: Optional[float] = 0.0,
135
+ generator: Optional[torch.Generator] = None,
136
+ output_type: Optional[str] = "pil",
137
+ return_dict: bool = True,
138
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
139
+ callback_steps: Optional[int] = 1,
140
+ ):
141
+ # For more information on how this function works, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion#diffusers.StableDiffusionImg2ImgPipeline
142
+ return StableDiffusionInpaintPipelineLegacy(**self.components)(
143
+ prompt=prompt,
144
+ image=image,
145
+ mask_image=mask_image,
146
+ strength=strength,
147
+ num_inference_steps=num_inference_steps,
148
+ guidance_scale=guidance_scale,
149
+ negative_prompt=negative_prompt,
150
+ num_images_per_prompt=num_images_per_prompt,
151
+ eta=eta,
152
+ generator=generator,
153
+ output_type=output_type,
154
+ return_dict=return_dict,
155
+ callback=callback,
156
+ )
157
+
158
+ @torch.no_grad()
159
+ def img2img(
160
+ self,
161
+ prompt: Union[str, List[str]],
162
+ image: Union[torch.FloatTensor, PIL.Image.Image],
163
+ strength: float = 0.8,
164
+ num_inference_steps: Optional[int] = 50,
165
+ guidance_scale: Optional[float] = 7.5,
166
+ negative_prompt: Optional[Union[str, List[str]]] = None,
167
+ num_images_per_prompt: Optional[int] = 1,
168
+ eta: Optional[float] = 0.0,
169
+ generator: Optional[torch.Generator] = None,
170
+ output_type: Optional[str] = "pil",
171
+ return_dict: bool = True,
172
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
173
+ callback_steps: Optional[int] = 1,
174
+ **kwargs,
175
+ ):
176
+ # For more information on how this function works, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion#diffusers.StableDiffusionImg2ImgPipeline
177
+ return StableDiffusionImg2ImgPipeline(**self.components)(
178
+ prompt=prompt,
179
+ image=image,
180
+ strength=strength,
181
+ num_inference_steps=num_inference_steps,
182
+ guidance_scale=guidance_scale,
183
+ negative_prompt=negative_prompt,
184
+ num_images_per_prompt=num_images_per_prompt,
185
+ eta=eta,
186
+ generator=generator,
187
+ output_type=output_type,
188
+ return_dict=return_dict,
189
+ callback=callback,
190
+ callback_steps=callback_steps,
191
+ )
192
+
193
+ @torch.no_grad()
194
+ def text2img(
195
+ self,
196
+ prompt: Union[str, List[str]],
197
+ height: int = 512,
198
+ width: int = 512,
199
+ num_inference_steps: int = 50,
200
+ guidance_scale: float = 7.5,
201
+ negative_prompt: Optional[Union[str, List[str]]] = None,
202
+ num_images_per_prompt: Optional[int] = 1,
203
+ eta: float = 0.0,
204
+ generator: Optional[torch.Generator] = None,
205
+ latents: Optional[torch.FloatTensor] = None,
206
+ output_type: Optional[str] = "pil",
207
+ return_dict: bool = True,
208
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
209
+ callback_steps: Optional[int] = 1,
210
+ ):
211
+ # For more information on how this function works, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion#diffusers.StableDiffusionPipeline
212
+ return StableDiffusionPipeline(**self.components)(
213
+ prompt=prompt,
214
+ height=height,
215
+ width=width,
216
+ num_inference_steps=num_inference_steps,
217
+ guidance_scale=guidance_scale,
218
+ negative_prompt=negative_prompt,
219
+ num_images_per_prompt=num_images_per_prompt,
220
+ eta=eta,
221
+ generator=generator,
222
+ latents=latents,
223
+ output_type=output_type,
224
+ return_dict=return_dict,
225
+ callback=callback,
226
+ callback_steps=callback_steps,
227
+ )
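+
+
+ # Minimal usage sketch, assuming this file is loaded as a community pipeline through the
+ # `custom_pipeline` argument; prompts and parameters below are illustrative.
+ if __name__ == "__main__":
+     device = "cuda" if torch.cuda.is_available() else "cpu"
+     pipe = DiffusionPipeline.from_pretrained(
+         "CompVis/stable-diffusion-v1-4",
+         custom_pipeline="stable_diffusion_mega",
+     ).to(device)
+
+     # text-to-image
+     image = pipe.text2img("an astronaut riding a horse on mars").images[0]
+     # image-to-image, reusing the text-to-image result as the init image
+     image = pipe.img2img(prompt="a fantasy landscape, trending on artstation", image=image, strength=0.75).images[0]
+     image.save("stable_diffusion_mega.png")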
v0.11.1/text_inpainting.py ADDED
@@ -0,0 +1,302 @@
1
+ from typing import Callable, List, Optional, Union
2
+
3
+ import torch
4
+
5
+ import PIL
6
+ from diffusers.configuration_utils import FrozenDict
7
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
8
+ from diffusers.pipeline_utils import DiffusionPipeline
9
+ from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline
10
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
11
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
12
+ from diffusers.utils import deprecate, is_accelerate_available, logging
13
+ from transformers import (
14
+ CLIPFeatureExtractor,
15
+ CLIPSegForImageSegmentation,
16
+ CLIPSegProcessor,
17
+ CLIPTextModel,
18
+ CLIPTokenizer,
19
+ )
20
+
21
+
22
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
23
+
24
+
25
+ class TextInpainting(DiffusionPipeline):
26
+ r"""
27
+ Pipeline for text based inpainting using Stable Diffusion.
28
+ Uses CLIPSeg to get a mask from the given text, then calls the Inpainting pipeline with the generated mask
29
+
30
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
31
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
32
+
33
+ Args:
34
+ segmentation_model ([`CLIPSegForImageSegmentation`]):
35
+ CLIPSeg Model to generate mask from the given text. Please refer to the [model card]() for details.
36
+ segmentation_processor ([`CLIPSegProcessor`]):
37
+ CLIPSeg processor to prepare the image and text inputs for the segmentation model. Please refer to the
38
+ [model card](https://huggingface.co/docs/transformers/model_doc/clipseg) for details.
39
+ vae ([`AutoencoderKL`]):
40
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
41
+ text_encoder ([`CLIPTextModel`]):
42
+ Frozen text-encoder. Stable Diffusion uses the text portion of
43
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
44
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
45
+ tokenizer (`CLIPTokenizer`):
46
+ Tokenizer of class
47
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
48
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
49
+ scheduler ([`SchedulerMixin`]):
50
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
51
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
52
+ safety_checker ([`StableDiffusionSafetyChecker`]):
53
+ Classification module that estimates whether generated images could be considered offensive or harmful.
54
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
55
+ feature_extractor ([`CLIPFeatureExtractor`]):
56
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
57
+ """
58
+
59
+ def __init__(
60
+ self,
61
+ segmentation_model: CLIPSegForImageSegmentation,
62
+ segmentation_processor: CLIPSegProcessor,
63
+ vae: AutoencoderKL,
64
+ text_encoder: CLIPTextModel,
65
+ tokenizer: CLIPTokenizer,
66
+ unet: UNet2DConditionModel,
67
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
68
+ safety_checker: StableDiffusionSafetyChecker,
69
+ feature_extractor: CLIPFeatureExtractor,
70
+ ):
71
+ super().__init__()
72
+
73
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
74
+ deprecation_message = (
75
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
76
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
77
+ "to update the config accordingly as leaving `steps_offset` might lead to incorrect results"
78
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
79
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
80
+ " file"
81
+ )
82
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
83
+ new_config = dict(scheduler.config)
84
+ new_config["steps_offset"] = 1
85
+ scheduler._internal_dict = FrozenDict(new_config)
86
+
87
+ if hasattr(scheduler.config, "skip_prk_steps") and scheduler.config.skip_prk_steps is False:
88
+ deprecation_message = (
89
+ f"The configuration file of this scheduler: {scheduler} has not set the configuration"
90
+ " `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make"
91
+ " sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to"
92
+ " incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face"
93
+ " Hub, it would be very nice if you could open a Pull request for the"
94
+ " `scheduler/scheduler_config.json` file"
95
+ )
96
+ deprecate("skip_prk_steps not set", "1.0.0", deprecation_message, standard_warn=False)
97
+ new_config = dict(scheduler.config)
98
+ new_config["skip_prk_steps"] = True
99
+ scheduler._internal_dict = FrozenDict(new_config)
100
+
101
+ if safety_checker is None:
102
+ logger.warning(
103
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
104
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
105
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
106
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
107
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
108
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
109
+ )
110
+
111
+ self.register_modules(
112
+ segmentation_model=segmentation_model,
113
+ segmentation_processor=segmentation_processor,
114
+ vae=vae,
115
+ text_encoder=text_encoder,
116
+ tokenizer=tokenizer,
117
+ unet=unet,
118
+ scheduler=scheduler,
119
+ safety_checker=safety_checker,
120
+ feature_extractor=feature_extractor,
121
+ )
122
+
123
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
124
+ r"""
125
+ Enable sliced attention computation.
126
+
127
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
128
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
129
+
130
+ Args:
131
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
132
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
133
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
134
+ `attention_head_dim` must be a multiple of `slice_size`.
135
+ """
136
+ if slice_size == "auto":
137
+ # half the attention head size is usually a good trade-off between
138
+ # speed and memory
139
+ slice_size = self.unet.config.attention_head_dim // 2
140
+ self.unet.set_attention_slice(slice_size)
141
+
142
+ def disable_attention_slicing(self):
143
+ r"""
144
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
145
+ back to computing attention in one step.
146
+ """
147
+ # set slice_size = `None` to disable `attention slicing`
148
+ self.enable_attention_slicing(None)
149
+
150
+ def enable_sequential_cpu_offload(self):
151
+ r"""
152
+ Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
153
+ text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
154
+ `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
155
+ """
156
+ if is_accelerate_available():
157
+ from accelerate import cpu_offload
158
+ else:
159
+ raise ImportError("Please install accelerate via `pip install accelerate`")
160
+
161
+ device = torch.device("cuda")
162
+
163
+ for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:
164
+ if cpu_offloaded_model is not None:
165
+ cpu_offload(cpu_offloaded_model, device)
166
+
167
+ @property
168
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
169
+ def _execution_device(self):
170
+ r"""
171
+ Returns the device on which the pipeline's models will be executed. After calling
172
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
173
+ hooks.
174
+ """
175
+ if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
176
+ return self.device
177
+ for module in self.unet.modules():
178
+ if (
179
+ hasattr(module, "_hf_hook")
180
+ and hasattr(module._hf_hook, "execution_device")
181
+ and module._hf_hook.execution_device is not None
182
+ ):
183
+ return torch.device(module._hf_hook.execution_device)
184
+ return self.device
185
+
186
+ @torch.no_grad()
187
+ def __call__(
188
+ self,
189
+ prompt: Union[str, List[str]],
190
+ image: Union[torch.FloatTensor, PIL.Image.Image],
191
+ text: str,
192
+ height: int = 512,
193
+ width: int = 512,
194
+ num_inference_steps: int = 50,
195
+ guidance_scale: float = 7.5,
196
+ negative_prompt: Optional[Union[str, List[str]]] = None,
197
+ num_images_per_prompt: Optional[int] = 1,
198
+ eta: float = 0.0,
199
+ generator: Optional[torch.Generator] = None,
200
+ latents: Optional[torch.FloatTensor] = None,
201
+ output_type: Optional[str] = "pil",
202
+ return_dict: bool = True,
203
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
204
+ callback_steps: Optional[int] = 1,
205
+ **kwargs,
206
+ ):
207
+ r"""
208
+ Function invoked when calling the pipeline for generation.
209
+
210
+ Args:
211
+ prompt (`str` or `List[str]`):
212
+ The prompt or prompts to guide the image generation.
213
+ image (`PIL.Image.Image`):
214
+ `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
215
+ be masked out with `mask_image` and repainted according to `prompt`.
216
+ text (`str``):
217
+ The text to use to generate the mask.
218
+ height (`int`, *optional*, defaults to 512):
219
+ The height in pixels of the generated image.
220
+ width (`int`, *optional*, defaults to 512):
221
+ The width in pixels of the generated image.
222
+ num_inference_steps (`int`, *optional*, defaults to 50):
223
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
224
+ expense of slower inference.
225
+ guidance_scale (`float`, *optional*, defaults to 7.5):
226
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
227
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
228
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
229
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
230
+ usually at the expense of lower image quality.
231
+ negative_prompt (`str` or `List[str]`, *optional*):
232
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
233
+ if `guidance_scale` is less than `1`).
234
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
235
+ The number of images to generate per prompt.
236
+ eta (`float`, *optional*, defaults to 0.0):
237
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
238
+ [`schedulers.DDIMScheduler`], will be ignored for others.
239
+ generator (`torch.Generator`, *optional*):
240
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
241
+ deterministic.
242
+ latents (`torch.FloatTensor`, *optional*):
243
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
244
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
245
+ tensor will be generated by sampling using the supplied random `generator`.
246
+ output_type (`str`, *optional*, defaults to `"pil"`):
247
+ The output format of the generate image. Choose between
248
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
249
+ return_dict (`bool`, *optional*, defaults to `True`):
250
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
251
+ plain tuple.
252
+ callback (`Callable`, *optional*):
253
+ A function that will be called every `callback_steps` steps during inference. The function will be
254
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
255
+ callback_steps (`int`, *optional*, defaults to 1):
256
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
257
+ called at every step.
258
+
259
+ Returns:
260
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
261
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
262
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
263
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
264
+ (nsfw) content, according to the `safety_checker`.
265
+ """
266
+
267
+ # We use the input text to generate the mask
268
+ inputs = self.segmentation_processor(
269
+ text=[text], images=[image], padding="max_length", return_tensors="pt"
270
+ ).to(self.device)
271
+ outputs = self.segmentation_model(**inputs)
272
+ mask = torch.sigmoid(outputs.logits).cpu().detach().unsqueeze(-1).numpy()
273
+ mask_pil = self.numpy_to_pil(mask)[0].resize(image.size)
274
+
275
+ # Run inpainting pipeline with the generated mask
276
+ inpainting_pipeline = StableDiffusionInpaintPipeline(
277
+ vae=self.vae,
278
+ text_encoder=self.text_encoder,
279
+ tokenizer=self.tokenizer,
280
+ unet=self.unet,
281
+ scheduler=self.scheduler,
282
+ safety_checker=self.safety_checker,
283
+ feature_extractor=self.feature_extractor,
284
+ )
285
+ return inpainting_pipeline(
286
+ prompt=prompt,
287
+ image=image,
288
+ mask_image=mask_pil,
289
+ height=height,
290
+ width=width,
291
+ num_inference_steps=num_inference_steps,
292
+ guidance_scale=guidance_scale,
293
+ negative_prompt=negative_prompt,
294
+ num_images_per_prompt=num_images_per_prompt,
295
+ eta=eta,
296
+ generator=generator,
297
+ latents=latents,
298
+ output_type=output_type,
299
+ return_dict=return_dict,
300
+ callback=callback,
301
+ callback_steps=callback_steps,
302
+ )
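+
+
+ # Minimal usage sketch, assuming this file is loaded as a community pipeline through the
+ # `custom_pipeline` argument; the CLIPSeg checkpoint, the inpainting base model and the
+ # local image path below are illustrative choices.
+ if __name__ == "__main__":
+     from PIL import Image
+
+     segmentation_processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
+     segmentation_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
+
+     pipe = DiffusionPipeline.from_pretrained(
+         "runwayml/stable-diffusion-inpainting",
+         custom_pipeline="text_inpainting",
+         segmentation_model=segmentation_model,
+         segmentation_processor=segmentation_processor,
+     ).to("cuda" if torch.cuda.is_available() else "cpu")
+
+     init_image = Image.open("example.png").resize((512, 512))  # hypothetical local image
+     # mask the region described by `text` and repaint it according to `prompt`
+     image = pipe(image=init_image, text="a glass", prompt="a cup of coffee").images[0]
+     image.save("text_inpainting.png")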
v0.11.1/wildcard_stable_diffusion.py ADDED
@@ -0,0 +1,418 @@
+ import inspect
+ import os
+ import random
+ import re
+ from dataclasses import dataclass
+ from typing import Callable, Dict, List, Optional, Union
+
+ import torch
+
+ from diffusers.configuration_utils import FrozenDict
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
+ from diffusers.pipeline_utils import DiffusionPipeline
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
+ from diffusers.utils import deprecate, logging
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
+
+
+ logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
+
+ global_re_wildcard = re.compile(r"__([^_]*)__")
+
+
+ def get_filename(path: str):
+     # this doesn't work on Windows
+     return os.path.basename(path).split(".txt")[0]
+
+
+ def read_wildcard_values(path: str):
+     with open(path, encoding="utf8") as f:
+         return f.read().splitlines()
+
+
+ def grab_wildcard_values(wildcard_option_dict: Dict[str, List[str]] = {}, wildcard_files: List[str] = []):
+     for wildcard_file in wildcard_files:
+         filename = get_filename(wildcard_file)
+         read_values = read_wildcard_values(wildcard_file)
+         if filename not in wildcard_option_dict:
+             wildcard_option_dict[filename] = []
+         wildcard_option_dict[filename].extend(read_values)
+     return wildcard_option_dict
+
+
+ def replace_prompt_with_wildcards(
+     prompt: str, wildcard_option_dict: Dict[str, List[str]] = {}, wildcard_files: List[str] = []
+ ):
+     new_prompt = prompt
+
+     # get wildcard options
+     wildcard_option_dict = grab_wildcard_values(wildcard_option_dict, wildcard_files)
+
+     for m in global_re_wildcard.finditer(new_prompt):
+         wildcard_value = m.group()
+         replace_value = random.choice(wildcard_option_dict[wildcard_value.strip("__")])
+         new_prompt = new_prompt.replace(wildcard_value, replace_value, 1)
+
+     return new_prompt
+
+
+ @dataclass
+ class WildcardStableDiffusionOutput(StableDiffusionPipelineOutput):
+     prompts: List[str]
+
+
+ class WildcardStableDiffusionPipeline(DiffusionPipeline):
+     r"""
+     Example Usage:
+         pipe = WildcardStableDiffusionPipeline.from_pretrained(
+             "CompVis/stable-diffusion-v1-4",
+             torch_dtype=torch.float16,
+         )
+         prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
+         out = pipe(
+             prompt,
+             wildcard_option_dict={
+                 "clothing": ["hat", "shirt", "scarf", "beret"]
+             },
+             wildcard_files=["object.txt", "animal.txt"],
+             num_prompt_samples=1
+         )
+
+     Pipeline for text-to-image generation with wildcards using Stable Diffusion.
+
+     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
+     library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).
+
+     Args:
+         vae ([`AutoencoderKL`]):
+             Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
+         text_encoder ([`CLIPTextModel`]):
+             Frozen text encoder. Stable Diffusion uses the text portion of
+             [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
+             the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
+         tokenizer (`CLIPTokenizer`):
+             Tokenizer of class
+             [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
+         unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
+         scheduler ([`SchedulerMixin`]):
+             A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
+             [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
+         safety_checker ([`StableDiffusionSafetyChecker`]):
+             Classification module that estimates whether generated images could be considered offensive or harmful.
+             Please refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
+         feature_extractor ([`CLIPFeatureExtractor`]):
+             Model that extracts features from generated images to be used as inputs for the `safety_checker`.
+     """
+
+     def __init__(
+         self,
+         vae: AutoencoderKL,
+         text_encoder: CLIPTextModel,
+         tokenizer: CLIPTokenizer,
+         unet: UNet2DConditionModel,
+         scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
+         safety_checker: StableDiffusionSafetyChecker,
+         feature_extractor: CLIPFeatureExtractor,
+     ):
+         super().__init__()
+
+         if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
+             deprecation_message = (
+                 f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
+                 f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
+                 "to update the config accordingly, as leaving `steps_offset` might lead to incorrect results"
+                 " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
+                 " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
+                 " file"
+             )
+             deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
+             new_config = dict(scheduler.config)
+             new_config["steps_offset"] = 1
+             scheduler._internal_dict = FrozenDict(new_config)
+
+         if safety_checker is None:
+             logger.warning(
+                 f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
+                 " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
+                 " results in services or applications open to the public. Both the diffusers team and Hugging Face"
+                 " strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling"
+                 " it only for use cases that involve analyzing network behavior or auditing its results. For more"
+                 " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
+             )
+
+         self.register_modules(
+             vae=vae,
+             text_encoder=text_encoder,
+             tokenizer=tokenizer,
+             unet=unet,
+             scheduler=scheduler,
+             safety_checker=safety_checker,
+             feature_extractor=feature_extractor,
+         )
+
+     @torch.no_grad()
+     def __call__(
+         self,
+         prompt: Union[str, List[str]],
+         height: int = 512,
+         width: int = 512,
+         num_inference_steps: int = 50,
+         guidance_scale: float = 7.5,
+         negative_prompt: Optional[Union[str, List[str]]] = None,
+         num_images_per_prompt: Optional[int] = 1,
+         eta: float = 0.0,
+         generator: Optional[torch.Generator] = None,
+         latents: Optional[torch.FloatTensor] = None,
+         output_type: Optional[str] = "pil",
+         return_dict: bool = True,
+         callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
+         callback_steps: Optional[int] = 1,
+         wildcard_option_dict: Dict[str, List[str]] = {},
+         wildcard_files: List[str] = [],
+         num_prompt_samples: Optional[int] = 1,
+         **kwargs,
+     ):
+         r"""
+         Function invoked when calling the pipeline for generation.
+
+         Args:
+             prompt (`str` or `List[str]`):
+                 The prompt or prompts to guide the image generation.
+             height (`int`, *optional*, defaults to 512):
+                 The height in pixels of the generated image.
+             width (`int`, *optional*, defaults to 512):
+                 The width in pixels of the generated image.
+             num_inference_steps (`int`, *optional*, defaults to 50):
+                 The number of denoising steps. More denoising steps usually lead to a higher quality image at the
+                 expense of slower inference.
+             guidance_scale (`float`, *optional*, defaults to 7.5):
+                 Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
+                 `guidance_scale` is defined as `w` of equation 2 of the [Imagen
+                 Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
+                 1`. A higher guidance scale encourages images that are closely linked to the text `prompt`, usually
+                 at the expense of lower image quality.
+             negative_prompt (`str` or `List[str]`, *optional*):
+                 The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e.,
+                 ignored if `guidance_scale` is less than `1`).
+             num_images_per_prompt (`int`, *optional*, defaults to 1):
+                 The number of images to generate per prompt.
+             eta (`float`, *optional*, defaults to 0.0):
+                 Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
+                 [`schedulers.DDIMScheduler`], will be ignored for others.
+             generator (`torch.Generator`, *optional*):
+                 A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
+                 deterministic.
+             latents (`torch.FloatTensor`, *optional*):
+                 Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
+                 generation. Can be used to tweak the same generation with different prompts. If not provided, a
+                 latents tensor will be generated by sampling using the supplied random `generator`.
+             output_type (`str`, *optional*, defaults to `"pil"`):
+                 The output format of the generated image. Choose between
+                 [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
+             return_dict (`bool`, *optional*, defaults to `True`):
+                 Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
+                 plain tuple.
+             callback (`Callable`, *optional*):
+                 A function that will be called every `callback_steps` steps during inference. The function will be
+                 called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
+             callback_steps (`int`, *optional*, defaults to 1):
+                 The frequency at which the `callback` function will be called. If not specified, the callback will be
+                 called at every step.
+             wildcard_option_dict (`Dict[str, List[str]]`):
+                 Dict with a wildcard as key and a list of possible replacements as value. For example, for the prompt
+                 "A __animal__ sitting on a chair", `wildcard_option_dict` can provide possible values for "animal"
+                 like `{"animal": ["dog", "cat", "fox"]}`.
+             wildcard_files (`List[str]`):
+                 List of paths to txt files holding wildcard replacements, one per line. For example, for the prompt
+                 "A __animal__ sitting on a chair", `["animal.txt"]` can be provided.
+             num_prompt_samples (`int`):
+                 Number of times to sample wildcards for each prompt provided.
+
+         Returns:
+             [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
+             [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a
+             `tuple`. When returning a tuple, the first element is a list with the generated images, and the second
+             element is a list of `bool`s denoting whether the corresponding generated image likely represents
+             "not-safe-for-work" (nsfw) content, according to the `safety_checker`.
+         """
+
+         if isinstance(prompt, str):
+             prompt = [
+                 replace_prompt_with_wildcards(prompt, wildcard_option_dict, wildcard_files)
+                 for i in range(num_prompt_samples)
+             ]
+             batch_size = len(prompt)
+         elif isinstance(prompt, list):
+             prompt_list = []
+             for p in prompt:
+                 for i in range(num_prompt_samples):
+                     prompt_list.append(replace_prompt_with_wildcards(p, wildcard_option_dict, wildcard_files))
+             prompt = prompt_list
+             batch_size = len(prompt)
+         else:
+             raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
+
+         if height % 8 != 0 or width % 8 != 0:
+             raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
+
+         if (callback_steps is None) or (
+             callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
+         ):
+             raise ValueError(
+                 f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
+                 f" {type(callback_steps)}."
+             )
+
+         # get prompt text embeddings
+         text_inputs = self.tokenizer(
+             prompt,
+             padding="max_length",
+             max_length=self.tokenizer.model_max_length,
+             return_tensors="pt",
+         )
+         text_input_ids = text_inputs.input_ids
+
+         if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
+             removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
+             logger.warning(
+                 "The following part of your input was truncated because CLIP can only handle sequences up to"
+                 f" {self.tokenizer.model_max_length} tokens: {removed_text}"
+             )
+             text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
+         text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
+
+         # duplicate text embeddings for each generation per prompt, using mps friendly method
+         bs_embed, seq_len, _ = text_embeddings.shape
+         text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
+         text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
+
+         # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
+         # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
+         # corresponds to doing no classifier free guidance.
+         do_classifier_free_guidance = guidance_scale > 1.0
+         # get unconditional embeddings for classifier free guidance
+         if do_classifier_free_guidance:
+             uncond_tokens: List[str]
+             if negative_prompt is None:
+                 uncond_tokens = [""] * batch_size
+             elif type(prompt) is not type(negative_prompt):
+                 raise TypeError(
+                     f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
+                     f" {type(prompt)}."
+                 )
+             elif isinstance(negative_prompt, str):
+                 uncond_tokens = [negative_prompt]
+             elif batch_size != len(negative_prompt):
+                 raise ValueError(
+                     f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
+                     f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
+                     " the batch size of `prompt`."
+                 )
+             else:
+                 uncond_tokens = negative_prompt
+
+             max_length = text_input_ids.shape[-1]
+             uncond_input = self.tokenizer(
+                 uncond_tokens,
+                 padding="max_length",
+                 max_length=max_length,
+                 truncation=True,
+                 return_tensors="pt",
+             )
+             uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
+
+             # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
+             seq_len = uncond_embeddings.shape[1]
+             uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
+             uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
+
+             # For classifier free guidance, we need to do two forward passes.
+             # Here we concatenate the unconditional and text embeddings into a single batch
+             # to avoid doing two forward passes
+             text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
+
+         # get the initial random noise unless the user supplied it
+
+         # Unlike in other pipelines, latents need to be generated in the target device
+         # for 1-to-1 results reproducibility with the CompVis implementation.
+         # However this currently doesn't work in `mps`.
+         latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
+         latents_dtype = text_embeddings.dtype
+         if latents is None:
+             if self.device.type == "mps":
+                 # randn does not exist on mps
+                 latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
+                     self.device
+                 )
+             else:
+                 latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
+         else:
+             if latents.shape != latents_shape:
+                 raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
+             latents = latents.to(self.device)
+
+         # set timesteps
+         self.scheduler.set_timesteps(num_inference_steps)
+
+         # Some schedulers like PNDM have timesteps as arrays
+         # It's more optimized to move all timesteps to the correct device beforehand
+         timesteps_tensor = self.scheduler.timesteps.to(self.device)
+
+         # scale the initial noise by the standard deviation required by the scheduler
+         latents = latents * self.scheduler.init_noise_sigma
+
+         # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
+         # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
+         # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
+         # and should be between [0, 1]
+         accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
+         extra_step_kwargs = {}
+         if accepts_eta:
+             extra_step_kwargs["eta"] = eta
+
+         for i, t in enumerate(self.progress_bar(timesteps_tensor)):
+             # expand the latents if we are doing classifier free guidance
+             latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
+             latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+             # predict the noise residual
+             noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
+
+             # perform guidance
+             if do_classifier_free_guidance:
+                 noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
+                 noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+             # compute the previous noisy sample x_t -> x_t-1
+             latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
+
+             # call the callback, if provided
+             if callback is not None and i % callback_steps == 0:
+                 callback(i, t, latents)
+
+         latents = 1 / 0.18215 * latents
+         image = self.vae.decode(latents).sample
+
+         image = (image / 2 + 0.5).clamp(0, 1)
+
+         # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
+         image = image.cpu().permute(0, 2, 3, 1).float().numpy()
+
+         if self.safety_checker is not None:
+             safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
+                 self.device
+             )
+             image, has_nsfw_concept = self.safety_checker(
+                 images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
+             )
+         else:
+             has_nsfw_concept = None
+
+         if output_type == "pil":
+             image = self.numpy_to_pil(image)
+
+         if not return_dict:
+             return (image, has_nsfw_concept)
+
+         return WildcardStableDiffusionOutput(images=image, nsfw_content_detected=has_nsfw_concept, prompts=prompt)
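For reference, here is a minimal end-to-end sketch of driving this pipeline. The call arguments mirror the class docstring example above; the `custom_pipeline="wildcard_stable_diffusion"` loading route and the contents of the wildcard txt files are assumptions for illustration.

```python
import torch
from diffusers import DiffusionPipeline

# Wildcard options can come from txt files (one replacement per line) and/or a
# dict passed at call time. The file contents below are illustrative only.
with open("animal.txt", "w") as f:
    f.write("dog\ncat\nfox")
with open("object.txt", "w") as f:
    f.write("chair\nsofa\nbench")

# Assumes this file is consumed as the "wildcard_stable_diffusion" community pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="wildcard_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    "__animal__ sitting on a __object__ wearing a __clothing__",
    wildcard_option_dict={"clothing": ["hat", "shirt", "scarf", "beret"]},
    wildcard_files=["object.txt", "animal.txt"],
    num_prompt_samples=1,
)
out.images[0].save("wildcard_result.png")
print(out.prompts)  # the concrete prompts after wildcard substitution
```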