sayakpaul committed
Commit 9b4884a
1 Parent(s): fd6afc9

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. scrapped_outputs/0012265cfd9e8129835a009e5814c990.txt +96 -0
  2. scrapped_outputs/002e4f7b8ab52c12963857540b2c6ad7.txt +107 -0
  3. scrapped_outputs/00338ebc720885d1d32274136bd7514e.txt +6 -0
  4. scrapped_outputs/003990abb5bccb7515ba047c3f63eebe.txt +96 -0
  5. scrapped_outputs/004595462592973e8bbc3c61f477d432.txt +74 -0
  6. scrapped_outputs/004a80e3475d06e8d1f59f3264b0d35b.txt +215 -0
  7. scrapped_outputs/004c24a7d6387b52ef9a323876ac7239.txt +0 -0
  8. scrapped_outputs/007512d8a5a14389eb3f6aa13d0f082f.txt +255 -0
  9. scrapped_outputs/009a3df3d8ecf57196b920d396c1eb45.txt +215 -0
  10. scrapped_outputs/00a44ba96e48f08abc944973f3de6edb.txt +136 -0
  11. scrapped_outputs/00efdfed25ed505d82383e1aa6f01ddb.txt +0 -0
  12. scrapped_outputs/010878c4f61adff57a313b69bfbf36ee.txt +45 -0
  13. scrapped_outputs/010b61c1b09524892e674b81e6a567e2.txt +8 -0
  14. scrapped_outputs/013e30f4683bc1e82d2b6b2027109bad.txt +11 -0
  15. scrapped_outputs/014fb36531fe935112c5eaa247063735.txt +163 -0
  16. scrapped_outputs/01a8586bc0784a4627557a3815ff5b5d.txt +100 -0
  17. scrapped_outputs/01be2bbed29849c60e5daa8454e05de7.txt +286 -0
  18. scrapped_outputs/01d80081236d3aed18b8ca7aabd28034.txt +18 -0
  19. scrapped_outputs/01df407ddd0ca5935cbb0f71822a1c38.txt +83 -0
  20. scrapped_outputs/0247f496918051ff626a635f40c86068.txt +217 -0
  21. scrapped_outputs/024b6d495f66ffbe96d4b6dc2553b492.txt +260 -0
  22. scrapped_outputs/029a71d92796bdac8ab84604964508c7.txt +53 -0
  23. scrapped_outputs/02a8a2246909676ce154902d0be79029.txt +0 -0
  24. scrapped_outputs/02aee9759affa29fb25ab0383cbb3c8d.txt +138 -0
  25. scrapped_outputs/02bd848b35977a9c9f00ad003cb069ef.txt +48 -0
  26. scrapped_outputs/031de0c7e6fbc268b733b53d76fd629b.txt +58 -0
  27. scrapped_outputs/0337e3a463f82d01341bcedbe24ef622.txt +217 -0
  28. scrapped_outputs/0355b252e25654dc434b0da048d15629.txt +56 -0
  29. scrapped_outputs/035d2eb81551ae17f2f6548c483bb4ce.txt +61 -0
  30. scrapped_outputs/037a312aaecccf6bc6297a4be6c94e34.txt +107 -0
  31. scrapped_outputs/039174a093290e2204530344edb27be3.txt +265 -0
  32. scrapped_outputs/03a8acbaedc64b38f5af066e6bbee2e3.txt +10 -0
  33. scrapped_outputs/041d6ec5bc898d377b96ad1c3e5ce22b.txt +1 -0
  34. scrapped_outputs/04343d970e3a9bf96cf88b007a727277.txt +17 -0
  35. scrapped_outputs/044358532f240b4e1a89ecfcec43efdc.txt +1 -0
  36. scrapped_outputs/04532fa8bf4664942bca163e9ce7d3af.txt +18 -0
  37. scrapped_outputs/04863d9d6a0a778c9d89bfaf5c722799.txt +58 -0
  38. scrapped_outputs/04a5c43352cba1852d9743227a5502ec.txt +11 -0
  39. scrapped_outputs/04b6c971d3b3042cb398245d60d142af.txt +50 -0
  40. scrapped_outputs/0513b0801d8c780910edb8268d9b7b3b.txt +1 -0
  41. scrapped_outputs/05377f15590571c32cefbc2656f68eeb.txt +137 -0
  42. scrapped_outputs/05582e67bfcec7fa9b41e4219522b5e8.txt +75 -0
  43. scrapped_outputs/0563c13a7c1c4c7bf534f8ba98328463.txt +66 -0
  44. scrapped_outputs/056988b6242e71f9baa34a0128b3b910.txt +61 -0
  45. scrapped_outputs/0571ee854112d412f8b230bbf015c40b.txt +0 -0
  46. scrapped_outputs/0589ba813ef6923277cca7ee6b454f67.txt +138 -0
  47. scrapped_outputs/05b0f824d9e6de69327504f27e90b9e6.txt +0 -0
  48. scrapped_outputs/05cb598c3dda9e4d07cb0d08b8e89e80.txt +0 -0
  49. scrapped_outputs/05fc9a1b7b04cc46e3de44a240e518af.txt +40 -0
  50. scrapped_outputs/060ba29d724ef0efe0746d1279958f67.txt +24 -0
scrapped_outputs/0012265cfd9e8129835a009e5814c990.txt ADDED
@@ -0,0 +1,96 @@
1
+ I2VGen-XL I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models by Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. The abstract from the paper is: Video synthesis has recently made remarkable strides benefiting from the rapid development of diffusion models. However, it still encounters challenges in terms of semantic accuracy, clarity and spatio-temporal continuity. They primarily arise from the scarcity of well-aligned text-video data and the complex inherent structure of videos, making it difficult for the model to simultaneously ensure semantic and qualitative excellence. In this report, we propose a cascaded I2VGen-XL approach that enhances model performance by decoupling these two factors and ensures the alignment of the input data by utilizing static images as a form of crucial guidance. I2VGen-XL consists of two stages: i) the base stage guarantees coherent semantics and preserves content from input images by using two hierarchical encoders, and ii) the refinement stage enhances the video’s details by incorporating an additional brief text and improves the resolution to 1280Γ—720. To improve the diversity, we collect around 35 million single-shot text-video pairs and 6 billion text-image pairs to optimize the model. By this means, I2VGen-XL can simultaneously enhance the semantic accuracy, continuity of details and clarity of generated videos. Through extensive experiments, we have investigated the underlying principles of I2VGen-XL and compared it with current top methods, which can demonstrate its effectiveness on diverse data. The source code and models will be publicly available at this https URL. The original codebase can be found here. The model checkpoints can be found here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Also, to know more about reducing the memory usage of this pipeline, refer to the [β€œReduce memory usage”] section here. Sample output with I2VGenXL: masterpiece, bestquality, sunset.
2
+ Notes I2VGenXL always uses a clip_skip value of 1. This means it leverages the penultimate layer representations from the text encoder of CLIP. It can generate videos of quality that is often on par with Stable Video Diffusion (SVD). Unlike SVD, it additionally accepts text prompts as inputs. It can generate higher resolution videos. When using the DDIMScheduler (which is default for this pipeline), less than 50 steps for inference leads to bad results. I2VGenXLPipeline class diffusers.I2VGenXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer image_encoder: CLIPVisionModelWithProjection feature_extractor: CLIPImageProcessor unet: I2VGenXLUNet scheduler: DDIMScheduler ) Parameters vae (AutoencoderKL) β€”
3
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) β€”
4
+ Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) β€”
5
+ A CLIPTokenizer to tokenize text. unet (I2VGenXLUNet) β€”
6
+ A I2VGenXLUNet to denoise the encoded video latents. scheduler (DDIMScheduler) β€”
7
+ A scheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for image-to-video generation as proposed in I2VGenXL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
8
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = 704 width: Optional = 1280 target_fps: Optional = 16 num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 num_videos_per_prompt: Optional = 1 decode_chunk_size: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = 1 ) β†’ pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) β€”
9
+ The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) β€”
10
+ Image or images to guide image generation. If you provide a tensor, it needs to be compatible with
11
+ CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
12
+ The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
13
+ The width in pixels of the generated image. target_fps (int, optional) β€”
14
+ Frames per second. The rate at which the generated images are exported to a video after generation. This is also used as a "micro-condition" during generation. num_frames (int, optional) —
15
+ The number of video frames to generate. num_inference_steps (int, optional) β€”
16
+ The number of denoising steps. guidance_scale (float, optional, defaults to 9.0) —
17
+ A higher guidance scale value encourages the model to generate images closely linked to the text
18
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) β€”
19
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
20
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional) β€”
21
+ Corresponds to parameter eta (Ξ·) from the DDIM paper. Only applies
22
+ to the DDIMScheduler, and is ignored in other schedulers. num_videos_per_prompt (int, optional) β€”
23
+ The number of images to generate per prompt. decode_chunk_size (int, optional) β€”
24
+ The number of frames to decode at a time. The higher the chunk size, the higher the temporal consistency
25
+ between frames, but also the higher the memory consumption. By default, the decoder will decode all frames at once
26
+ for maximal quality. Reduce decode_chunk_size to reduce memory usage. generator (torch.Generator or List[torch.Generator], optional) β€”
27
+ A torch.Generator to make
28
+ generation deterministic. latents (torch.FloatTensor, optional) β€”
29
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
30
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
31
+ tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) β€”
32
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
33
+ provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
34
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
35
+ not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") β€”
36
+ The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) β€”
37
+ Whether or not to return a pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput instead of a
38
+ plain tuple. cross_attention_kwargs (dict, optional) β€”
39
+ A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in
40
+ self.processor. clip_skip (int, optional) β€”
41
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
42
+ the output of the pre-final layer will be used for computing the prompt embeddings. Returns
43
+ pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple
44
+
45
+ If return_dict is True, pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput is
46
+ returned, otherwise a tuple is returned where the first element is a list with the generated frames.
47
+ The call function to the pipeline for image-to-video generation with I2VGenXLPipeline. Examples: Copied >>> import torch
48
+ >>> from diffusers import I2VGenXLPipeline
+ >>> from diffusers.utils import export_to_gif, load_image
49
+
50
+ >>> pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16")
51
+ >>> pipeline.enable_model_cpu_offload()
52
+
53
+ >>> image_url = "https://github.com/ali-vilab/i2vgen-xl/blob/main/data/test_images/img_0009.png?raw=true"
54
+ >>> image = load_image(image_url).convert("RGB")
55
+
56
+ >>> prompt = "Papers were floating in the air on a table in the library"
57
+ >>> negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms"
58
+ >>> generator = torch.manual_seed(8888)
59
+
60
+ >>> frames = pipeline(
61
+ ... prompt=prompt,
62
+ ... image=image,
63
+ ... num_inference_steps=50,
64
+ ... negative_prompt=negative_prompt,
65
+ ... guidance_scale=9.0,
66
+ ... generator=generator
67
+ ... ).frames[0]
68
+ >>> video_path = export_to_gif(frames, "i2v.gif") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
69
+ computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to
70
+ computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) β€”
71
+ Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
72
+ mitigate β€œoversmoothing effect” in the enhanced denoising process. s2 (float) β€”
73
+ Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
74
+ mitigate β€œoversmoothing effect” in the enhanced denoising process. b1 (float) β€” Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) β€” Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values
75
+ that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
76
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
77
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
78
+ processing larger images. encode_prompt < source > ( prompt device num_videos_per_prompt negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) β€”
79
+ prompt to be encoded
80
+ device β€” (torch.device):
81
+ torch device num_videos_per_prompt (int) β€”
82
+ number of images that should be generated per prompt do_classifier_free_guidance (bool) β€”
83
+ whether to use classifier free guidance or not negative_prompt (str or List[str], optional) β€”
84
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
85
+ negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
86
+ less than 1). prompt_embeds (torch.FloatTensor, optional) β€”
87
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
88
+ provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
89
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
90
+ weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
91
+ argument. lora_scale (float, optional) β€”
92
+ A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) β€”
93
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
94
+ the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. I2VGenXLPipelineOutput class diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) β€”
95
+ List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as
96
+ a torch tensor. The length of the list denotes the video length (the number of frames). Output class for image-to-video pipeline.
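For reference, the memory and FreeU options documented above (enable_vae_slicing, decode_chunk_size, enable_freeu) can be combined with the main example. The following is a minimal sketch; the FreeU scaling factors and the decode_chunk_size value are illustrative choices, not tuned recommendations for I2VGen-XL.

import torch
from diffusers import I2VGenXLPipeline
from diffusers.utils import export_to_gif, load_image

pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16")
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()  # decode the video latents in slices to save memory

# FreeU: s1/s2 attenuate skip features, b1/b2 amplify backbone features (illustrative values)
pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2)

image_url = "https://github.com/ali-vilab/i2vgen-xl/blob/main/data/test_images/img_0009.png?raw=true"
image = load_image(image_url).convert("RGB")

frames = pipeline(
    prompt="Papers were floating in the air on a table in the library",
    image=image,
    num_inference_steps=50,  # fewer than 50 steps with the default DDIMScheduler tends to degrade quality
    decode_chunk_size=2,     # decode a few frames at a time to reduce peak memory
    generator=torch.manual_seed(8888),
).frames[0]
export_to_gif(frames, "i2v_freeu.gif")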
scrapped_outputs/002e4f7b8ab52c12963857540b2c6ad7.txt ADDED
@@ -0,0 +1,107 @@
1
+ Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation.
2
+ Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) β€”
3
+ Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) β€”
4
+ Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) β€”
5
+ A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) β€”
6
+ A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) β€”
7
+ A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
8
+ DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) β€”
9
+ Classification module that estimates whether generated images could be considered offensive or harmful.
10
+ Please refer to the model card for more details
11
+ about a model’s potential harms. feature_extractor (CLIPImageProcessor) β€”
12
+ A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass
13
+ documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular
14
+ device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) β†’ SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) β€”
15
+ The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
16
+ The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
17
+ The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) β€”
18
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
19
+ expense of slower inference. guidance_scale (float, optional, defaults to 7.5) β€”
20
+ A higher guidance scale value encourages the model to generate images closely linked to the text
21
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) β€”
22
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
23
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) β€”
24
+ The number of images to generate per prompt. eta (float, optional, defaults to 0.0) β€”
25
+ Corresponds to parameter eta (Ξ·) from the DDIM paper. Only applies
26
+ to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) β€”
27
+ A torch.Generator to make
28
+ generation deterministic. latents (torch.FloatTensor, optional) β€”
29
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
30
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
31
+ tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") β€”
32
+ The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) β€”
33
+ Whether or not to return a StableDiffusionPipelineOutput instead of a
34
+ plain tuple. callback (Callable, optional) β€”
35
+ A function that calls every callback_steps steps during inference. The function is called with the
36
+ following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) β€”
37
+ The frequency at which the callback function is called. If not specified, the callback is called at
38
+ every step. editing_prompt (str or List[str], optional) β€”
39
+ The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting
40
+ editing_prompt = None. Guidance direction of prompt should be specified via
41
+ reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) β€”
42
+ Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be
43
+ specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) β€”
44
+ Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) β€”
45
+ Guidance scale for semantic guidance. If provided as a list, values should correspond to
46
+ editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) β€”
47
+ Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is
48
+ calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) β€”
49
+ Number of diffusion steps (for each prompt) after which semantic guidance is no longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) —
50
+ Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) β€”
51
+ Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0,
52
+ momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than
53
+ sld_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) β€”
54
+ Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous
55
+ momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than
56
+ edit_warmup_steps). edit_weights (List[float], optional, defaults to None) β€”
57
+ Indicates how much each individual concept should influence the overall guidance. If no weights are
58
+ provided all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) β€”
59
+ List of pre-generated guidance vectors to be applied at generation. Length of the list has to
60
+ correspond to num_inference_steps. Returns
61
+ SemanticStableDiffusionPipelineOutput or tuple
62
+
63
+ If return_dict is True,
64
+ SemanticStableDiffusionPipelineOutput is returned, otherwise a
65
+ tuple is returned where the first element is a list with the generated images and the second element
66
+ is a list of bools indicating whether the corresponding generated image contains β€œnot-safe-for-work”
67
+ (nsfw) content.
68
+ The call function to the pipeline for generation. Examples: Copied >>> import torch
69
+ >>> from diffusers import SemanticStableDiffusionPipeline
70
+
71
+ >>> pipe = SemanticStableDiffusionPipeline.from_pretrained(
72
+ ... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
73
+ ... )
74
+ >>> pipe = pipe.to("cuda")
75
+
76
+ >>> out = pipe(
77
+ ... prompt="a photo of the face of a woman",
78
+ ... num_images_per_prompt=1,
79
+ ... guidance_scale=7,
80
+ ... editing_prompt=[
81
+ ... "smiling, smile", # Concepts to apply
82
+ ... "glasses, wearing glasses",
83
+ ... "curls, wavy hair, curly hair",
84
+ ... "beard, full beard, mustache",
85
+ ... ],
86
+ ... reverse_editing_direction=[
87
+ ... False,
88
+ ... False,
89
+ ... False,
90
+ ... False,
91
+ ... ], # Direction of guidance i.e. increase all concepts
92
+ ... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept
93
+ ... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept
94
+ ... edit_threshold=[
95
+ ... 0.99,
96
+ ... 0.975,
97
+ ... 0.925,
98
+ ... 0.96,
99
+ ... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions
100
+ ... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance
101
+ ... edit_mom_beta=0.6, # Momentum beta
102
+ ... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other
103
+ ... )
104
+ >>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) β€”
105
+ List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) β€”
106
+ List indicating whether the corresponding generated image contains β€œnot-safe-for-work” (nsfw) content or
107
+ None if safety checking could not be performed. Output class for Stable Diffusion pipelines.
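The reverse_editing_direction argument described above can also be used to suppress a concept instead of adding it. Below is a minimal sketch along the lines of the example above; the prompt and the edit values are illustrative, not tuned settings.

import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

out = pipe(
    prompt="a photo of the face of a woman wearing glasses",
    editing_prompt=["glasses, wearing glasses"],  # concept to edit
    reverse_editing_direction=[True],             # steer away from the concept instead of toward it
    edit_guidance_scale=[5.0],
    edit_warmup_steps=[10],
    edit_threshold=[0.95],
)
image = out.images[0]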
scrapped_outputs/00338ebc720885d1d32274136bd7514e.txt ADDED
@@ -0,0 +1,6 @@
1
+ Overview πŸ€— Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts include: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image πŸ‘ πŸ‘ πŸ‘ textual inversion πŸ‘ DreamBooth πŸ‘ πŸ‘ πŸ‘ ControlNet πŸ‘ πŸ‘ InstructPix2Pix πŸ‘ Custom Diffusion T2I-Adapters πŸ‘ Kandinsky 2.2 πŸ‘ Wuerstchen πŸ‘ These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers
2
+ cd diffusers
3
+ pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. Copied cd examples/dreambooth
4
+ pip install -r requirements.txt
5
+ # to train SDXL with DreamBooth
6
+ pip install -r requirements_sdxl.txt To speedup training and reduce memory-usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention
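As a quick sanity check for the speed and memory recommendations above, the sketch below verifies that PyTorch 2.0's scaled dot product attention is available and whether xFormers is installed; it assumes nothing beyond the two packages mentioned in the text.

import torch
import torch.nn.functional as F

# PyTorch 2.0+ exposes scaled_dot_product_attention, which training then uses automatically
print(torch.__version__)
print(hasattr(F, "scaled_dot_product_attention"))

try:
    import xformers  # optional memory-efficient attention backend
    print("xFormers", xformers.__version__)
except ImportError:
    print("xFormers is not installed")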
scrapped_outputs/003990abb5bccb7515ba047c3f63eebe.txt ADDED
@@ -0,0 +1,96 @@
1
+ DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality
2
+ samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space
3
+ diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic
4
+ thresholding. This thresholding method is unsuitable for latent-space diffusion models such as
5
+ Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order sde-dpmsolver++. DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) β€”
6
+ The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) β€”
7
+ The starting beta value of inference. beta_end (float, defaults to 0.02) β€”
8
+ The final beta value. beta_schedule (str, defaults to "linear") β€”
9
+ The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
10
+ linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) β€”
11
+ Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) β€”
12
+ The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided
13
+ sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) β€”
14
+ Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process),
15
+ sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen
16
+ Video paper). thresholding (bool, defaults to False) β€”
17
+ Whether to use the β€œdynamic thresholding” method. This is unsuitable for latent-space diffusion models such
18
+ as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) β€”
19
+ The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) β€”
20
+ The threshold value for dynamic thresholding. Valid only when thresholding=True and
21
+ algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) β€”
22
+ Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The
23
+ dpmsolver type implements the algorithms in the DPMSolver
24
+ paper, and the dpmsolver++ type implements the algorithms in the
25
+ DPMSolver++ paper. It is recommended to use dpmsolver++ or
26
+ sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) β€”
27
+ Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the
28
+ sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) β€”
29
+ Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can
30
+ stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) β€”
31
+ Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail
32
+ richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference
33
+ steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) β€”
34
+ Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True,
35
+ the sigmas are determined according to a sequence of noise levels {Οƒi}. use_lu_lambdas (bool, optional, defaults to False) β€”
36
+ Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during
37
+ the sampling process. If True, the sigmas and time steps are determined according to a sequence of
38
+ lambda(t). lambda_min_clipped (float, defaults to -inf) β€”
39
+ Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the
40
+ cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) β€”
41
+ Set to β€œlearned” or β€œlearned_range” for diffusion models that predict variance. If set, the model’s output
42
+ contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") β€”
43
+ The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and
44
+ Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) β€”
45
+ An offset added to the inference steps. You can use a combination of offset=1 and
46
+ set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable
47
+ Diffusion. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic
48
+ methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) β†’ torch.FloatTensor Parameters model_output (torch.FloatTensor) β€”
49
+ The direct output from the learned diffusion model. sample (torch.FloatTensor) β€”
50
+ A current instance of a sample created by the diffusion process. Returns
51
+ torch.FloatTensor
52
+
53
+ The converted model output.
54
+ Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is
55
+ designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
56
+ integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
57
+ prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) β†’ torch.FloatTensor Parameters model_output (torch.FloatTensor) β€”
58
+ The direct output from the learned diffusion model. sample (torch.FloatTensor) β€”
59
+ A current instance of a sample created by the diffusion process. Returns
60
+ torch.FloatTensor
61
+
62
+ The sample tensor at the previous timestep.
63
+ One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) β†’ torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) β€”
64
+ The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) β€”
65
+ A current instance of a sample created by the diffusion process. Returns
66
+ torch.FloatTensor
67
+
68
+ The sample tensor at the previous timestep.
69
+ One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) β†’ torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) β€”
70
+ The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) β€”
71
+ A current instance of a sample created by diffusion process. Returns
72
+ torch.FloatTensor
73
+
74
+ The sample tensor at the previous timestep.
75
+ One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) β†’ torch.FloatTensor Parameters sample (torch.FloatTensor) β€”
76
+ The input sample. Returns
77
+ torch.FloatTensor
78
+
79
+ A scaled input sample.
80
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
81
+ current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) β€”
82
+ The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) β€”
83
+ The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) β†’ SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) β€”
84
+ The direct output from learned diffusion model. timestep (int) β€”
85
+ The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) β€”
86
+ A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) β€”
87
+ A random number generator. return_dict (bool) β€”
88
+ Whether or not to return a SchedulerOutput or tuple. Returns
89
+ SchedulerOutput or tuple
90
+
91
+ If return_dict is True, SchedulerOutput is returned, otherwise a
92
+ tuple is returned where the first element is the sample tensor.
93
+ Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
94
+ the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) β€”
95
+ Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the
96
+ denoising loop. Base class for the output of a scheduler’s step function.
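None of the reference above shows the scheduler in use; the following is a minimal sketch of swapping DPMSolverMultistepScheduler into a Stable Diffusion pipeline with the settings recommended in the Tips. The checkpoint name and prompt are only examples.

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# guided sampling: solver_order=2 with the dpmsolver++ algorithm (see Tips above)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", solver_order=2, use_karras_sigmas=True
)
pipe = pipe.to("cuda")

# DPMSolver++ typically produces good samples in around 20 steps
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]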
scrapped_outputs/004595462592973e8bbc3c61f477d432.txt ADDED
@@ -0,0 +1,74 @@
1
+ DDIMScheduler Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample.
2
+ To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models
3
+ with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process.
4
+ We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from.
5
+ We empirically demonstrate that DDIMs can produce high quality samples 10Γ— to 50Γ— faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: πŸ§ͺ This is an experimental feature! rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler
6
+ import torch
7
+
8
+ pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)
9
+ pipe.scheduler = DDIMScheduler.from_config(
10
+ pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
11
+ )
12
+ pipe.to("cuda")
13
+
14
+ prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
15
+ image = pipe(prompt, guidance_rescale=0.7).images[0]
16
+ image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) β€”
17
+ The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) β€”
18
+ The starting beta value of inference. beta_end (float, defaults to 0.02) β€”
19
+ The final beta value. beta_schedule (str, defaults to "linear") β€”
20
+ The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
21
+ linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) β€”
22
+ Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) β€”
23
+ Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) β€”
24
+ The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) β€”
25
+ Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
26
+ there is no previous alpha. When this option is True the previous alpha product is fixed to 1,
27
+ otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) β€”
28
+ An offset added to the inference steps. You can use a combination of offset=1 and
29
+ set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable
30
+ Diffusion. prediction_type (str, defaults to epsilon, optional) β€”
31
+ Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process),
32
+ sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen
33
+ Video paper). thresholding (bool, defaults to False) β€”
34
+ Whether to use the β€œdynamic thresholding” method. This is unsuitable for latent-space diffusion models such
35
+ as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) β€”
36
+ The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) β€”
37
+ The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") β€”
38
+ The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and
39
+ Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) β€”
40
+ Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
41
+ dark samples instead of limiting it to samples with medium brightness. Loosely related to
42
+ --offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with
43
+ non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic
44
+ methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) β†’ torch.FloatTensor Parameters sample (torch.FloatTensor) β€”
45
+ The input sample. timestep (int, optional) β€”
46
+ The current timestep in the diffusion chain. Returns
47
+ torch.FloatTensor
48
+
49
+ A scaled input sample.
50
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
51
+ current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) β€”
52
+ The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) β†’ ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) β€”
53
+ The direct output from learned diffusion model. timestep (float) β€”
54
+ The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) β€”
55
+ A current instance of a sample created by the diffusion process. eta (float) β€”
56
+ The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) β€”
57
+ If True, computes β€œcorrected” model_output from the clipped predicted original sample. Necessary
58
+ because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no
59
+ clipping has happened, β€œcorrected” model_output would coincide with the one provided as input and
60
+ use_clipped_model_output has no effect. generator (torch.Generator, optional) β€”
61
+ A random number generator. variance_noise (torch.FloatTensor) β€”
62
+ Alternative to generating noise with generator by directly providing the noise for the variance
63
+ itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) β€”
64
+ Whether or not to return a DDIMSchedulerOutput or tuple. Returns
65
+ ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple
66
+
67
+ If return_dict is True, DDIMSchedulerOutput is returned, otherwise a
68
+ tuple is returned where the first element is the sample tensor.
69
+ Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
70
+ process from the learned model outputs (most often the predicted noise). DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) β€”
71
+ Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the
72
+ denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) β€”
73
+ The predicted denoised sample (x_{0}) based on the model output from the current timestep.
74
+ pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output.
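The set_timesteps() and step() methods documented above can also be driven manually. Below is a minimal sketch of a bare DDIM sampling loop; the google/ddpm-cat-256 checkpoint is only an example of an unconditional UNet2DModel, and the default DDIMScheduler settings (1000 training timesteps, linear betas) are assumed to match how that model was trained.

import torch
from diffusers import DDIMScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cat-256")  # example unconditional UNet
scheduler = DDIMScheduler()  # defaults: 1000 train timesteps, linear beta schedule

scheduler.set_timesteps(50)
sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample
    # eta=0.0 gives the deterministic DDIM update; larger eta adds DDPM-like stochasticity
    sample = scheduler.step(noise_pred, t, sample, eta=0.0).prev_sample

image = (sample / 2 + 0.5).clamp(0, 1)  # map from [-1, 1] to [0, 1] for viewing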
scrapped_outputs/004a80e3475d06e8d1f59f3264b0d35b.txt ADDED
@@ -0,0 +1,215 @@
1
+ Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. Copied import torch
2
+ from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
3
+ from diffusers.utils import export_to_gif
4
+
5
+ # Load the motion adapter
6
+ adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
7
+ # load SD 1.5 based finetuned model
8
+ model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
9
+ pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
10
+ scheduler = DDIMScheduler.from_pretrained(
11
+ model_id,
12
+ subfolder="scheduler",
13
+ clip_sample=False,
14
+ timestep_spacing="linspace",
15
+ beta_schedule="linear",
16
+ steps_offset=1,
17
+ )
18
+ pipe.scheduler = scheduler
19
+
20
+ # enable memory savings
21
+ pipe.enable_vae_slicing()
22
+ pipe.enable_model_cpu_offload()
23
+
24
+ output = pipe(
25
+ prompt=(
26
+ "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
27
+ "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
28
+ "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
29
+ "golden hour, coastal landscape, seaside scenery"
30
+ ),
31
+ negative_prompt="bad quality, worse quality",
32
+ num_frames=16,
33
+ guidance_scale=7.5,
34
+ num_inference_steps=25,
35
+ generator=torch.Generator("cpu").manual_seed(42),
36
+ )
37
+ frames = output.frames[0]
38
+ export_to_gif(frames, "animation.gif")
39
+ Here are some sample outputs: masterpiece, bestquality, sunset.
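+ If you would rather have an MP4 than a GIF, diffusers also provides an export_to_video helper in diffusers.utils. A minimal sketch, assuming the frames list from the example above; older diffusers releases expect NumPy frames, so the PIL frames are converted first:
+ import numpy as np
+ from diffusers.utils import export_to_video
+ 
+ # convert the PIL frames returned by the pipeline to arrays, then write an MP4
+ video_frames = [np.array(frame) for frame in frames]
+ export_to_video(video_frames, "animation.mp4")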
40
+ AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, disable that behavior by setting clip_sample=False in the scheduler, since sample clipping can have an adverse effect on the generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the scheduler's beta schedule; we recommend setting it to linear. Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. Copied import torch
41
+ from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
42
+ from diffusers.utils import export_to_gif
43
+
44
+ # Load the motion adapter
45
+ adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
46
+ # load SD 1.5 based finetuned model
47
+ model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
48
+ pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
49
+ pipe.load_lora_weights(
50
+ "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out"
51
+ )
52
+
53
+ scheduler = DDIMScheduler.from_pretrained(
54
+ model_id,
55
+ subfolder="scheduler",
56
+ clip_sample=False,
57
+ beta_schedule="linear",
58
+ timestep_spacing="linspace",
59
+ steps_offset=1,
60
+ )
61
+ pipe.scheduler = scheduler
62
+
63
+ # enable memory savings
64
+ pipe.enable_vae_slicing()
65
+ pipe.enable_model_cpu_offload()
66
+
67
+ output = pipe(
68
+ prompt=(
69
+ "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
70
+ "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
71
+ "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
72
+ "golden hour, coastal landscape, seaside scenery"
73
+ ),
74
+ negative_prompt="bad quality, worse quality",
75
+ num_frames=16,
76
+ guidance_scale=7.5,
77
+ num_inference_steps=25,
78
+ generator=torch.Generator("cpu").manual_seed(42),
79
+ )
80
+ frames = output.frames[0]
81
+ export_to_gif(frames, "animation.gif")
82
+ masterpiece, bestquality, sunset.
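+ guoyww also publishes motion LoRAs for other camera moves (zoom-in, pan, tilt, and rolling variants); switching the motion type only requires loading a different repository id. A small sketch, with the repository name assumed to follow the same naming pattern as the zoom-out checkpoint above:
+ # e.g. a panning motion instead of a zoom (repository id assumed, check the guoyww hub page)
+ pipe.load_lora_weights("guoyww/animatediff-motion-lora-pan-left", adapter_name="pan-left")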
83
+ Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRAs and create more complex animations. First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch
84
+ from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
85
+ from diffusers.utils import export_to_gif
86
+
87
+ # Load the motion adapter
88
+ adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
89
+ # load SD 1.5 based finetuned model
90
+ model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
91
+ pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
92
+
93
+ pipe.load_lora_weights(
94
+ "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out",
95
+ )
96
+ pipe.load_lora_weights(
97
+ "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left",
98
+ )
99
+ pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0])
100
+
101
+ scheduler = DDIMScheduler.from_pretrained(
102
+ model_id,
103
+ subfolder="scheduler",
104
+ clip_sample=False,
105
+ timestep_spacing="linspace",
106
+ beta_schedule="linear",
107
+ steps_offset=1,
108
+ )
109
+ pipe.scheduler = scheduler
110
+
111
+ # enable memory savings
112
+ pipe.enable_vae_slicing()
113
+ pipe.enable_model_cpu_offload()
114
+
115
+ output = pipe(
116
+ prompt=(
117
+ "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
118
+ "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
119
+ "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
120
+ "golden hour, coastal landscape, seaside scenery"
121
+ ),
122
+ negative_prompt="bad quality, worse quality",
123
+ num_frames=16,
124
+ guidance_scale=7.5,
125
+ num_inference_steps=25,
126
+ generator=torch.Generator("cpu").manual_seed(42),
127
+ )
128
+ frames = output.frames[0]
129
+ export_to_gif(frames, "animation.gif")
130
+ masterpiece, bestquality, sunset.
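+ The adapter_weights passed to set_adapters control how strongly each motion LoRA influences the animation, so the combined motion can be rebalanced without reloading anything. A minimal sketch with illustrative weights:
+ # emphasize the zoom while keeping the pan subtle (weights are illustrative, tune to taste)
+ pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 0.6])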
131
+ Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) β€”
132
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) β€”
133
+ Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) β€”
134
+ A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) β€”
135
+ A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) β€”
136
+ A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) β€”
137
+ A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
138
+ DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
139
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) β†’ TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) β€”
140
+ The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
141
+ The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
142
+ The width in pixels of the generated video. num_frames (int, optional, defaults to 16) β€”
143
+ The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
144
+ amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) β€”
145
+ The number of denoising steps. More denoising steps usually lead to higher quality videos at the
146
+ expense of slower inference. guidance_scale (float, optional, defaults to 7.5) β€”
147
+ A higher guidance scale value encourages the model to generate images closely linked to the text
148
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) β€”
149
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
150
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) β€”
151
+ Corresponds to parameter eta (Ξ·) from the DDIM paper. Only applies
152
+ to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) β€”
153
+ A torch.Generator to make
154
+ generation deterministic. latents (torch.FloatTensor, optional) β€”
155
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
156
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
157
+ tensor is generated by sampling using the supplied random generator. Latents should be of shape
158
+ (batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) β€”
159
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
160
+ provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
161
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
162
+ not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
163
+ ip_adapter_image β€” (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") β€”
164
+ The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or
165
+ np.array. return_dict (bool, optional, defaults to True) β€”
166
+ Whether or not to return a TextToVideoSDPipelineOutput instead
167
+ of a plain tuple. callback (Callable, optional) β€”
168
+ A function that calls every callback_steps steps during inference. The function is called with the
169
+ following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) β€”
170
+ The frequency at which the callback function is called. If not specified, the callback is called at
171
+ every step. cross_attention_kwargs (dict, optional) β€”
172
+ A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in
173
+ self.processor. clip_skip (int, optional) β€”
174
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
175
+ the output of the pre-final layer will be used for computing the prompt embeddings. Returns
176
+ TextToVideoSDPipelineOutput or tuple
177
+
178
+ If return_dict is True, TextToVideoSDPipelineOutput is
179
+ returned, otherwise a tuple is returned where the first element is a list with the generated frames.
180
+ The call function to the pipeline for generation. Examples: Copied >>> import torch
181
+ >>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
182
+ >>> from diffusers.utils import export_to_gif
183
+
184
+ >>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
185
+ >>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter)
186
+ >>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False)
187
+ >>> output = pipe(prompt="A corgi walking in the park")
188
+ >>> frames = output.frames[0]
189
+ >>> export_to_gif(frames, "animation.gif") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
190
+ computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to
191
+ computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) β€”
192
+ Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
193
+ mitigate β€œoversmoothing effect” in the enhanced denoising process. s2 (float) β€”
194
+ Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
195
+ mitigate β€œoversmoothing effect” in the enhanced denoising process. b1 (float) β€” Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) β€” Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values
196
+ that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
197
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
198
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
199
+ processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) β€”
200
+ prompt to be encoded
201
+ device β€” (torch.device):
202
+ torch device num_images_per_prompt (int) β€”
203
+ number of images that should be generated per prompt do_classifier_free_guidance (bool) β€”
204
+ whether to use classifier free guidance or not negative_prompt (str or List[str], optional) β€”
205
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
206
+ negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
207
+ less than 1). prompt_embeds (torch.FloatTensor, optional) β€”
208
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
209
+ provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
210
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
211
+ weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
212
+ argument. lora_scale (float, optional) β€”
213
+ A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) β€”
214
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
215
+ the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. enable_freeu disable_freeu enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union )
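+ The FreeU and VAE slicing/tiling methods documented above are toggled directly on the pipeline instance. A minimal sketch, reusing the pipe from the usage example; the FreeU factors are illustrative starting values, so check the official FreeU repository for the combinations recommended for your base model:
+ # enable FreeU and the memory-saving VAE modes before running the pipeline
+ pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)  # illustrative values, tune per model
+ pipe.enable_vae_slicing()
+ pipe.enable_vae_tiling()
+ # ... run pipe(...) as usual, then disable the features once they are no longer needed
+ pipe.disable_freeu()
+ pipe.disable_vae_slicing()
+ pipe.disable_vae_tiling()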
scrapped_outputs/004c24a7d6387b52ef9a323876ac7239.txt ADDED
File without changes
scrapped_outputs/007512d8a5a14389eb3f6aa13d0f082f.txt ADDED
@@ -0,0 +1,255 @@
1
+ DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. To generate an edited image with this pipeline, you must provide both an image mask (the source and target prompts can be specified manually or generated, and passed to generate_mask())
2
+ and a set of partially inverted latents (generated using invert()) as arguments when calling the pipeline to produce the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt
3
+ that let you control the locations of the semantic edits in the final image to be generated. Let's say,
4
+ you wanted to translate from "cat" to "dog". In this case, the edit direction will be "cat -> dog". To reflect
5
+ this in the generated mask, you simply have to set the embeddings related to the phrases including "cat" to
6
+ source_prompt and "dog" to target_prompt. When generating partially inverted latents using invert, assign a caption or text embedding describing the
7
+ overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the
8
+ source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt
9
+ and the target concept to prompt. Taking the above example, you simply have to set the embeddings related to
10
+ the phrases including "cat" to negative_prompt and "dog" to prompt. If you wanted to reverse the direction in the example above, i.e., "dog -> cat", then it's recommended to: (1) swap the source_prompt and target_prompt in the arguments to generate_mask, (2) change the input prompt in invert() to include "dog", and (3) swap the prompt and negative_prompt in the arguments used to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) β€”
11
+ Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) β€”
12
+ Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) β€”
13
+ A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) β€”
14
+ A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) β€”
15
+ A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) β€”
16
+ A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) β€”
17
+ Classification module that estimates whether generated images could be considered offensive or harmful.
18
+ Please refer to the model card for more details
19
+ about a model’s potential harms. feature_extractor (CLIPImageProcessor) β€”
20
+ A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
21
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) β†’ List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) β€”
22
+ Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) β€”
23
+ The prompt or prompts to guide semantic mask generation. If not defined, you need to pass
24
+ prompt_embeds. target_negative_prompt (str or List[str], optional) β€”
25
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
26
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) β€”
27
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
28
+ provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) β€”
29
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
30
+ not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) β€”
31
+ The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to
32
+ pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) β€”
33
+ The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you
34
+ need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) β€”
35
+ Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text
36
+ inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input
37
+ argument. source_negative_prompt_embeds (torch.FloatTensor, optional) β€”
38
+ Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily
39
+ tweak text inputs (prompt weighting). If not provided, text embeddings are generated from
40
+ source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) β€”
41
+ The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) β€”
42
+ The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0
43
+ and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) β€”
44
+ The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before
45
+ mask binarization. num_inference_steps (int, optional, defaults to 50) β€”
46
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
47
+ expense of slower inference. guidance_scale (float, optional, defaults to 7.5) β€”
48
+ A higher guidance scale value encourages the model to generate images closely linked to the text
49
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator or List[torch.Generator], optional) β€”
50
+ A torch.Generator to make
51
+ generation deterministic. output_type (str, optional, defaults to "pil") β€”
52
+ The output format of the generated image. Choose between PIL.Image or np.array. cross_attention_kwargs (dict, optional) β€”
53
+ A kwargs dictionary that if specified is passed along to the
54
+ AttnProcessor as defined in
55
+ self.processor. Returns
56
+ List[PIL.Image.Image] or np.array
57
+
58
+ When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images
59
+ with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s
60
+ np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor).
61
+ Generate a latent mask given a mask prompt, a target prompt, and an image. Copied >>> import PIL
62
+ >>> import requests
63
+ >>> import torch
64
+ >>> from io import BytesIO
65
+
66
+ >>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
67
+
68
+
69
+ >>> def download_image(url):
70
+ ... response = requests.get(url)
71
+ ... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
72
+
73
+
74
+ >>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
75
+
76
+ >>> init_image = download_image(img_url).resize((768, 768))
77
+
78
+ >>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
79
+ ... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
80
+ ... )
81
+ >>> pipe = pipe.to("cuda")
82
+
83
+ >>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
84
+ >>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
85
+ >>> pipe.enable_model_cpu_offload()
86
+
87
+ >>> mask_prompt = "A bowl of fruits"
88
+ >>> prompt = "A bowl of pears"
89
+
90
+ >>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
91
+ >>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents
92
+ >>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) β€”
93
+ The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image) β€”
94
+ Image or tensor representing an image batch to produce the inverted latents guided by prompt. inpaint_strength (float, optional, defaults to 0.8) β€”
95
+ Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When
96
+ inpaint_strength is 1, the inversion process is run for the full number of iterations specified in
97
+ num_inference_steps. image is used as a reference for the inversion process, and adding more noise
98
+ increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) β€”
99
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
100
+ expense of slower inference. guidance_scale (float, optional, defaults to 7.5) β€”
101
+ A higher guidance scale value encourages the model to generate images closely linked to the text
102
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) β€”
103
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
104
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) β€”
105
+ A torch.Generator to make
106
+ generation deterministic. prompt_embeds (torch.FloatTensor, optional) β€”
107
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
108
+ provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
109
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
110
+ not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) β€”
111
+ Whether or not to decode the inverted latents into a generated image. Setting this argument to True
112
+ decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") β€”
113
+ The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) β€”
114
+ Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a
115
+ plain tuple. callback (Callable, optional) β€”
116
+ A function that calls every callback_steps steps during inference. The function is called with the
117
+ following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) β€”
118
+ The frequency at which the callback function is called. If not specified, the callback is called at
119
+ every step. cross_attention_kwargs (dict, optional) β€”
120
+ A kwargs dictionary that if specified is passed along to the
121
+ AttnProcessor as defined in
122
+ self.processor. lambda_auto_corr (float, optional, defaults to 20.0) β€”
123
+ Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) β€”
124
+ Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) β€”
125
+ Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) β€”
126
+ Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL
127
+ >>> import requests
128
+ >>> import torch
129
+ >>> from io import BytesIO
130
+
131
+ >>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
132
+
133
+
134
+ >>> def download_image(url):
135
+ ... response = requests.get(url)
136
+ ... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
137
+
138
+
139
+ >>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
140
+
141
+ >>> init_image = download_image(img_url).resize((768, 768))
142
+
143
+ >>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
144
+ ... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
145
+ ... )
146
+ >>> pipe = pipe.to("cuda")
147
+
148
+ >>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
149
+ >>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
150
+ >>> pipe.enable_model_cpu_offload()
151
+
152
+ >>> prompt = "A bowl of fruits"
153
+
154
+ >>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents __call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_ckip: int = None ) β†’ StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) β€”
155
+ The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. mask_image (PIL.Image.Image) β€”
156
+ Image or tensor representing an image batch to mask the generated image. White pixels in the mask are
157
+ repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a
158
+ single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L)
159
+ instead of 3, so the expected shape would be (B, 1, H, W). image_latents (PIL.Image.Image or torch.FloatTensor) β€”
160
+ Partially noised image latents from the inversion process to be used as inputs for image generation. inpaint_strength (float, optional, defaults to 0.8) β€”
161
+ Indicates extent to inpaint the masked area. Must be between 0 and 1. When inpaint_strength is 1, the
162
+ denoising process is run on the masked area for the full number of iterations specified in
163
+ num_inference_steps. image_latents is used as a reference for the masked area, and adding more
164
+ noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) β€”
165
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
166
+ expense of slower inference. guidance_scale (float, optional, defaults to 7.5) β€”
167
+ A higher guidance scale value encourages the model to generate images closely linked to the text
168
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) β€”
169
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
170
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) β€”
171
+ The number of images to generate per prompt. eta (float, optional, defaults to 0.0) β€”
172
+ Corresponds to parameter eta (Ξ·) from the DDIM paper. Only applies
173
+ to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) β€”
174
+ A torch.Generator to make
175
+ generation deterministic. latents (torch.FloatTensor, optional) β€”
176
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
177
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
178
+ tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) β€”
179
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
180
+ provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
181
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
182
+ not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") β€”
183
+ The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) β€”
184
+ Whether or not to return a StableDiffusionPipelineOutput instead of a
185
+ plain tuple. callback (Callable, optional) β€”
186
+ A function that calls every callback_steps steps during inference. The function is called with the
187
+ following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) β€”
188
+ The frequency at which the callback function is called. If not specified, the callback is called at
189
+ every step. cross_attention_kwargs (dict, optional) β€”
190
+ A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in
191
+ self.processor. clip_skip (int, optional) β€”
192
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
193
+ the output of the pre-final layer will be used for computing the prompt embeddings. Returns
194
+ StableDiffusionPipelineOutput or tuple
195
+
196
+ If return_dict is True, StableDiffusionPipelineOutput is returned,
197
+ otherwise a tuple is returned where the first element is a list with the generated images and the
198
+ second element is a list of bools indicating whether the corresponding generated image contains
199
+ β€œnot-safe-for-work” (nsfw) content.
200
+ The call function to the pipeline for generation. Copied >>> import PIL
201
+ >>> import requests
202
+ >>> import torch
203
+ >>> from io import BytesIO
204
+
205
+ >>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
206
+
207
+
208
+ >>> def download_image(url):
209
+ ... response = requests.get(url)
210
+ ... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
211
+
212
+
213
+ >>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
214
+
215
+ >>> init_image = download_image(img_url).resize((768, 768))
216
+
217
+ >>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
218
+ ... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
219
+ ... )
220
+ >>> pipe = pipe.to("cuda")
221
+
222
+ >>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
223
+ >>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
224
+ >>> pipe.enable_model_cpu_offload()
225
+
226
+ >>> mask_prompt = "A bowl of fruits"
227
+ >>> prompt = "A bowl of pears"
228
+
229
+ >>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
230
+ >>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents
231
+ >>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
232
+ computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to
233
+ computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
234
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
235
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
236
+ processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) β€”
237
+ prompt to be encoded
238
+ device β€” (torch.device):
239
+ torch device num_images_per_prompt (int) β€”
240
+ number of images that should be generated per prompt do_classifier_free_guidance (bool) β€”
241
+ whether to use classifier free guidance or not negative_prompt (str or List[str], optional) β€”
242
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
243
+ negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
244
+ less than 1). prompt_embeds (torch.FloatTensor, optional) β€”
245
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
246
+ provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
247
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
248
+ weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
249
+ argument. lora_scale (float, optional) β€”
250
+ A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) β€”
251
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
252
+ the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) β€”
253
+ List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) β€”
254
+ List indicating whether the corresponding generated image contains β€œnot-safe-for-work” (nsfw) content or
255
+ None if safety checking could not be performed. Output class for Stable Diffusion pipelines.
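+ Because generate_mask returns a NumPy array by default (output_type="np"), the mask can be inspected or saved before running the final edit. A minimal sketch, assuming the mask_image returned in the examples above and the binary 0/1 values described in the generate_mask documentation:
+ import numpy as np
+ from PIL import Image
+ 
+ # save the first mask in the batch as a grayscale PNG for visual inspection
+ Image.fromarray((mask_image[0] * 255).astype(np.uint8)).save("diffedit_mask.png")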
scrapped_outputs/009a3df3d8ecf57196b920d396c1eb45.txt ADDED
@@ -0,0 +1,215 @@
1
+ Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. Copied import torch
2
+ from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
3
+ from diffusers.utils import export_to_gif
4
+
5
+ # Load the motion adapter
6
+ adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
7
+ # load SD 1.5 based finetuned model
8
+ model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
9
+ pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
10
+ scheduler = DDIMScheduler.from_pretrained(
11
+ model_id,
12
+ subfolder="scheduler",
13
+ clip_sample=False,
14
+ timestep_spacing="linspace",
15
+ beta_schedule="linear",
16
+ steps_offset=1,
17
+ )
18
+ pipe.scheduler = scheduler
19
+
20
+ # enable memory savings
21
+ pipe.enable_vae_slicing()
22
+ pipe.enable_model_cpu_offload()
23
+
24
+ output = pipe(
25
+ prompt=(
26
+ "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
27
+ "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
28
+ "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
29
+ "golden hour, coastal landscape, seaside scenery"
30
+ ),
31
+ negative_prompt="bad quality, worse quality",
32
+ num_frames=16,
33
+ guidance_scale=7.5,
34
+ num_inference_steps=25,
35
+ generator=torch.Generator("cpu").manual_seed(42),
36
+ )
37
+ frames = output.frames[0]
38
+ export_to_gif(frames, "animation.gif")
39
+ Here are some sample outputs: masterpiece, bestquality, sunset.
40
+ AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, disable that behavior by setting clip_sample=False in the scheduler, since sample clipping can have an adverse effect on the generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the scheduler's beta schedule; we recommend setting it to linear. Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. Copied import torch
41
+ from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
42
+ from diffusers.utils import export_to_gif
43
+
44
+ # Load the motion adapter
45
+ adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
46
+ # load SD 1.5 based finetuned model
47
+ model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
48
+ pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
49
+ pipe.load_lora_weights(
50
+ "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out"
51
+ )
52
+
53
+ scheduler = DDIMScheduler.from_pretrained(
54
+ model_id,
55
+ subfolder="scheduler",
56
+ clip_sample=False,
57
+ beta_schedule="linear",
58
+ timestep_spacing="linspace",
59
+ steps_offset=1,
60
+ )
61
+ pipe.scheduler = scheduler
62
+
63
+ # enable memory savings
64
+ pipe.enable_vae_slicing()
65
+ pipe.enable_model_cpu_offload()
66
+
67
+ output = pipe(
68
+ prompt=(
69
+ "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
70
+ "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
71
+ "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
72
+ "golden hour, coastal landscape, seaside scenery"
73
+ ),
74
+ negative_prompt="bad quality, worse quality",
75
+ num_frames=16,
76
+ guidance_scale=7.5,
77
+ num_inference_steps=25,
78
+ generator=torch.Generator("cpu").manual_seed(42),
79
+ )
80
+ frames = output.frames[0]
81
+ export_to_gif(frames, "animation.gif")
82
+ masterpiece, bestquality, sunset.
83
+ Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRAs and create more complex animations. First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch
84
+ from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
85
+ from diffusers.utils import export_to_gif
86
+
87
+ # Load the motion adapter
88
+ adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
89
+ # load SD 1.5 based finetuned model
90
+ model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
91
+ pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
92
+
93
+ pipe.load_lora_weights(
94
+ "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out",
95
+ )
96
+ pipe.load_lora_weights(
97
+ "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left",
98
+ )
99
+ pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0])
100
+
101
+ scheduler = DDIMScheduler.from_pretrained(
102
+ model_id,
103
+ subfolder="scheduler",
104
+ clip_sample=False,
105
+ timestep_spacing="linspace",
106
+ beta_schedule="linear",
107
+ steps_offset=1,
108
+ )
109
+ pipe.scheduler = scheduler
110
+
111
+ # enable memory savings
112
+ pipe.enable_vae_slicing()
113
+ pipe.enable_model_cpu_offload()
114
+
115
+ output = pipe(
116
+ prompt=(
117
+ "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
118
+ "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
119
+ "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
120
+ "golden hour, coastal landscape, seaside scenery"
121
+ ),
122
+ negative_prompt="bad quality, worse quality",
123
+ num_frames=16,
124
+ guidance_scale=7.5,
125
+ num_inference_steps=25,
126
+ generator=torch.Generator("cpu").manual_seed(42),
127
+ )
128
+ frames = output.frames[0]
129
+ export_to_gif(frames, "animation.gif")
130
+ masterpiece, bestquality, sunset.
131
+ Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) β€”
132
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) β€”
133
+ Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) β€”
134
+ A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) β€”
135
+ A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) β€”
136
+ A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) β€”
137
+ A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
138
+ DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
139
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) β†’ TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) β€”
140
+ The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
141
+ The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
142
+ The width in pixels of the generated video. num_frames (int, optional, defaults to 16) β€”
143
+ The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
144
+ amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) β€”
145
+ The number of denoising steps. More denoising steps usually lead to higher quality videos at the
146
+ expense of slower inference. guidance_scale (float, optional, defaults to 7.5) β€”
147
+ A higher guidance scale value encourages the model to generate images closely linked to the text
148
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) β€”
149
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
150
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) β€”
151
+ Corresponds to parameter eta (Ξ·) from the DDIM paper. Only applies
152
+ to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) β€”
153
+ A torch.Generator to make
154
+ generation deterministic. latents (torch.FloatTensor, optional) β€”
155
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
156
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
157
+ tensor is generated by sampling using the supplied random generator. Latents should be of shape
158
+ (batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) β€”
159
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
160
+ provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
161
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
162
+ not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
163
+ ip_adapter_image β€” (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") β€”
164
+ The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or
165
+ np.array. return_dict (bool, optional, defaults to True) β€”
166
+ Whether or not to return a TextToVideoSDPipelineOutput instead
167
+ of a plain tuple. callback (Callable, optional) β€”
168
+ A function that calls every callback_steps steps during inference. The function is called with the
169
+ following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) β€”
170
+ The frequency at which the callback function is called. If not specified, the callback is called at
171
+ every step. cross_attention_kwargs (dict, optional) β€”
172
+ A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in
173
+ self.processor. clip_skip (int, optional) β€”
174
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
175
+ the output of the pre-final layer will be used for computing the prompt embeddings. Returns
176
+ TextToVideoSDPipelineOutput or tuple
177
+
178
+ If return_dict is True, TextToVideoSDPipelineOutput is
179
+ returned, otherwise a tuple is returned where the first element is a list with the generated frames.
180
+ The call function to the pipeline for generation. Examples: Copied >>> import torch
181
+ >>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
182
+ >>> from diffusers.utils import export_to_gif
183
+
184
+ >>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
185
+ >>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter)
186
+ >>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False)
187
+ >>> output = pipe(prompt="A corgi walking in the park")
188
+ >>> frames = output.frames[0]
189
+ >>> export_to_gif(frames, "animation.gif") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
190
+ computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to
191
+ computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) β€”
192
+ Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
193
+ mitigate β€œoversmoothing effect” in the enhanced denoising process. s2 (float) β€”
194
+ Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
195
+ mitigate β€œoversmoothing effect” in the enhanced denoising process. b1 (float) β€” Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) β€” Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values
196
+ that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
197
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
198
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
199
+ processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) β€”
200
+ prompt to be encoded
201
+ device β€” (torch.device):
202
+ torch device num_images_per_prompt (int) β€”
203
+ number of images that should be generated per prompt do_classifier_free_guidance (bool) β€”
204
+ whether to use classifier free guidance or not negative_prompt (str or List[str], optional) β€”
205
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
206
+ negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
207
+ less than 1). prompt_embeds (torch.FloatTensor, optional) β€”
208
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
209
+ provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
210
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
211
+ weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
212
+ argument. lora_scale (float, optional) β€”
213
+ A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) β€”
214
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
215
+ the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. enable_freeu disable_freeu enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union )
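As a usage note, the memory-saving and FreeU helpers documented above can be combined with the pipeline call shown in the example. The sketch below reuses the checkpoints from that example; the FreeU scaling factors are assumed example values often suggested for SD v1.5 backbones, not values tuned for this checkpoint.
Copied
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter)
pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False)

# Decode the latent video in slices to reduce peak memory during VAE decoding.
pipe.enable_vae_slicing()

# FreeU re-weights skip (s1, s2) and backbone (b1, b2) features; these values are assumptions.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

output = pipe(prompt="A corgi walking in the park", num_frames=16, num_inference_steps=25)
export_to_gif(output.frames[0], "corgi.gif")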
scrapped_outputs/00a44ba96e48f08abc944973f3de6edb.txt ADDED
@@ -0,0 +1,136 @@
1
+ Cycle Diffusion Cycle Diffusion is a text guided image-to-image generation model proposed in Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance by Chen Henry Wu, Fernando De la Torre. The abstract from the paper is: Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs. The code is publicly available at this https URL. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. CycleDiffusionPipeline class diffusers.CycleDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) β€”
2
+ Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) β€”
3
+ Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) β€”
4
+ A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) β€”
5
+ A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) β€”
6
+ A scheduler to be used in combination with unet to denoise the encoded image latents. Can only be an
7
+ instance of DDIMScheduler. safety_checker (StableDiffusionSafetyChecker) β€”
8
+ Classification module that estimates whether generated images could be considered offensive or harmful.
9
+ Please refer to the model card for more details
10
+ about a model’s potential harms. feature_extractor (CLIPImageProcessor) β€”
11
+ A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image to image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
12
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: typing.Union[str, typing.List[str]] source_prompt: typing.Union[str, typing.List[str]] image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor]] = None strength: float = 0.8 num_inference_steps: typing.Optional[int] = 50 guidance_scale: typing.Optional[float] = 7.5 source_guidance_scale: typing.Optional[float] = 1 num_images_per_prompt: typing.Optional[int] = 1 eta: typing.Optional[float] = 0.1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: typing.Optional[int] = None ) β†’ StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) β€”
13
+ The prompt or prompts to guide the image generation. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) β€”
14
+ Image or tensor representing an image batch to be used as the starting point. Can also accept image
15
+ latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) β€”
16
+ Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a
17
+ starting point and more noise is added the higher the strength. The number of denoising steps depends
18
+ on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising
19
+ process runs for the full number of iterations specified in num_inference_steps. A value of 1
20
+ essentially ignores image. num_inference_steps (int, optional, defaults to 50) β€”
21
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
22
+ expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) β€”
23
+ A higher guidance scale value encourages the model to generate images closely linked to the text
24
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. source_guidance_scale (float, optional, defaults to 1) β€”
25
+ Guidance scale for the source prompt. This is useful to control the amount of influence the source
26
+ prompt has for encoding. num_images_per_prompt (int, optional, defaults to 1) β€”
27
+ The number of images to generate per prompt. eta (float, optional, defaults to 0.0) β€”
28
+ Corresponds to parameter eta (Ξ·) from the DDIM paper. Only applies
29
+ to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) β€”
30
+ A torch.Generator to make
31
+ generation deterministic. prompt_embeds (torch.FloatTensor, optional) β€”
32
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
33
+ provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
34
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
35
+ not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") β€”
36
+ The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) β€”
37
+ Whether or not to return a StableDiffusionPipelineOutput instead of a
38
+ plain tuple. callback (Callable, optional) β€”
39
+ A function that calls every callback_steps steps during inference. The function is called with the
40
+ following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) β€”
41
+ The frequency at which the callback function is called. If not specified, the callback is called at
42
+ every step. cross_attention_kwargs (dict, optional) β€”
43
+ A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in
44
+ self.processor. clip_skip (int, optional) β€”
45
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
46
+ the output of the pre-final layer will be used for computing the prompt embeddings. Returns
47
+ StableDiffusionPipelineOutput or tuple
48
+
49
+ If return_dict is True, StableDiffusionPipelineOutput is returned,
50
+ otherwise a tuple is returned where the first element is a list with the generated images and the
51
+ second element is a list of bools indicating whether the corresponding generated image contains
52
+ β€œnot-safe-for-work” (nsfw) content.
53
+ The call function to the pipeline for generation. Example: Copied import requests
54
+ import torch
55
+ from PIL import Image
56
+ from io import BytesIO
57
+
58
+ from diffusers import CycleDiffusionPipeline, DDIMScheduler
59
+
60
+ # load the pipeline
61
+ # make sure you're logged in with `huggingface-cli login`
62
+ model_id_or_path = "CompVis/stable-diffusion-v1-4"
63
+ scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler")
64
+ pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda")
65
+
66
+ # let's download an initial image
67
+ url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png"
68
+ response = requests.get(url)
69
+ init_image = Image.open(BytesIO(response.content)).convert("RGB")
70
+ init_image = init_image.resize((512, 512))
71
+ init_image.save("horse.png")
72
+
73
+ # let's specify a prompt
74
+ source_prompt = "An astronaut riding a horse"
75
+ prompt = "An astronaut riding an elephant"
76
+
77
+ # call the pipeline
78
+ image = pipe(
79
+ prompt=prompt,
80
+ source_prompt=source_prompt,
81
+ image=init_image,
82
+ num_inference_steps=100,
83
+ eta=0.1,
84
+ strength=0.8,
85
+ guidance_scale=2,
86
+ source_guidance_scale=1,
87
+ ).images[0]
88
+
89
+ image.save("horse_to_elephant.png")
90
+
91
+ # let's try another example
92
+ # See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion
93
+ url = (
94
+ "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
95
+ )
96
+ response = requests.get(url)
97
+ init_image = Image.open(BytesIO(response.content)).convert("RGB")
98
+ init_image = init_image.resize((512, 512))
99
+ init_image.save("black.png")
100
+
101
+ source_prompt = "A black colored car"
102
+ prompt = "A blue colored car"
103
+
104
+ # call the pipeline
105
+ torch.manual_seed(0)
106
+ image = pipe(
107
+ prompt=prompt,
108
+ source_prompt=source_prompt,
109
+ image=init_image,
110
+ num_inference_steps=100,
111
+ eta=0.1,
112
+ strength=0.85,
113
+ guidance_scale=3,
114
+ source_guidance_scale=1,
115
+ ).images[0]
116
+
117
+ image.save("black_to_blue.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) β€”
118
+ prompt to be encoded
119
+ device β€” (torch.device):
120
+ torch device num_images_per_prompt (int) β€”
121
+ number of images that should be generated per prompt do_classifier_free_guidance (bool) β€”
122
+ whether to use classifier free guidance or not negative_prompt (str or List[str], optional) β€”
123
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
124
+ negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
125
+ less than 1). prompt_embeds (torch.FloatTensor, optional) β€”
126
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
127
+ provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
128
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
129
+ weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
130
+ argument. lora_scale (float, optional) β€”
131
+ A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) β€”
132
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
133
+ the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) β€”
134
+ List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) β€”
135
+ List indicating whether the corresponding generated image contains β€œnot-safe-for-work” (nsfw) content or
136
+ None if safety checking could not be performed. Output class for Stable Diffusion pipelines.
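For reproducible edits, the documented generator argument can be used instead of seeding the global RNG. A minimal sketch, assuming the pipe and init_image objects from the example above are already in scope:
Copied
import torch

generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(
    prompt="A blue colored car",
    source_prompt="A black colored car",
    image=init_image,
    num_inference_steps=100,
    eta=0.1,
    strength=0.85,
    guidance_scale=3,
    source_guidance_scale=1,
    generator=generator,
).images[0]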
scrapped_outputs/00efdfed25ed505d82383e1aa6f01ddb.txt ADDED
File without changes
scrapped_outputs/010878c4f61adff57a313b69bfbf36ee.txt ADDED
@@ -0,0 +1,45 @@
1
+ EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) β€”
2
+ The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) β€”
3
+ The starting beta value of inference. beta_end (float, defaults to 0.02) β€”
4
+ The final beta value. beta_schedule (str, defaults to "linear") β€”
5
+ The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
6
+ linear or scaled_linear. trained_betas (np.ndarray, optional) β€”
7
+ Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) β€”
8
+ Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process),
9
+ sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen
10
+ Video paper). timestep_spacing (str, defaults to "linspace") β€”
11
+ The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and
12
+ Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) β€”
13
+ An offset added to the inference steps. You can use a combination of offset=1 and
14
+ set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable
15
+ Diffusion. rescale_betas_zero_snr (bool, defaults to False) β€”
16
+ Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
17
+ dark samples instead of limiting it to samples with medium brightness. Loosely related to
18
+ --offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic
19
+ methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) β†’ torch.FloatTensor Parameters sample (torch.FloatTensor) β€”
20
+ The input sample. timestep (int, optional) β€”
21
+ The current timestep in the diffusion chain. Returns
22
+ torch.FloatTensor
23
+
24
+ A scaled input sample.
25
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
26
+ current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) β€”
27
+ The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) β€”
28
+ The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) β†’ EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) β€”
29
+ The direct output from learned diffusion model. timestep (float) β€”
30
+ The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) β€”
31
+ A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) β€”
32
+ A random number generator. return_dict (bool) β€”
33
+ Whether or not to return a
34
+ EulerAncestralDiscreteSchedulerOutput or tuple. Returns
35
+ EulerAncestralDiscreteSchedulerOutput or tuple
36
+
37
+ If return_dict is True,
38
+ EulerAncestralDiscreteSchedulerOutput is returned,
39
+ otherwise a tuple is returned where the first element is the sample tensor.
40
+ Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
41
+ process from the learned model outputs (most often the predicted noise). EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) β€”
42
+ Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the
43
+ denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) β€”
44
+ The predicted denoised sample (x_{0}) based on the model output from the current timestep.
45
+ pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output.
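As a usage note, this scheduler is usually swapped into an existing pipeline from that pipeline's own scheduler config rather than constructed by hand. A minimal sketch, assuming a Stable Diffusion checkpoint such as runwayml/stable-diffusion-v1-5 is available:
Copied
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Reusing the existing config keeps the beta schedule, timestep spacing, etc. consistent.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# Euler ancestral sampling often produces good results in 20-30 steps.
image = pipe("a photograph of an astronaut riding a horse", num_inference_steps=25).images[0]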
scrapped_outputs/010b61c1b09524892e674b81e6a567e2.txt ADDED
@@ -0,0 +1,8 @@
1
+ ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) β€”
2
+ The number of diffusion steps to train the model. beta_min (int, defaults to 0.1) β€” beta_max (int, defaults to 20) β€” sampling_eps (int, defaults to 1e-3) β€”
3
+ The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic
4
+ methods the library implements for all schedulers such as loading and saving. set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) β€”
5
+ The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) β€”
6
+ The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () β€” x () β€” t () β€” generator (torch.Generator, optional) β€”
7
+ A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
8
+ process from the learned model outputs (most often the predicted noise).
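Since the scheduler is still marked as under construction, only the constructor and set_timesteps() documented above are exercised in this minimal sketch:
Copied
from diffusers.schedulers import ScoreSdeVpScheduler

# Defaults mirror the documented signature.
scheduler = ScoreSdeVpScheduler(num_train_timesteps=2000, beta_min=0.1, beta_max=20, sampling_eps=1e-3)

# Continuous timesteps decreasing from 1 to sampling_eps, to be consumed by step_pred() during sampling.
scheduler.set_timesteps(num_inference_steps=1000)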
scrapped_outputs/013e30f4683bc1e82d2b6b2027109bad.txt ADDED
@@ -0,0 +1,11 @@
1
+ Installing xFormers
2
+
3
+ We recommend the use of xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption.
4
+ Starting from version 0.0.16 of xFormers, released in January 2023, installation can be easily performed using pre-built pip wheels:
5
+
6
+
7
+ Copied
8
+ pip install xformers
9
+ The xFormers PIP package requires the latest version of PyTorch (1.13.1 as of xFormers 0.0.16). If you need to use a previous version of PyTorch, then we recommend you install xFormers from source using the project instructions.
10
+ After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption, as discussed here.
11
+ According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or Dreambooth) in some GPUs. If you observe that problem, please install a development version as indicated in that comment.
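A minimal sketch of toggling the xFormers attention path on a loaded pipeline (the checkpoint name is only an example):
Copied
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# The optimization can be switched off again if it causes issues.
pipe.disable_xformers_memory_efficient_attention()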
scrapped_outputs/014fb36531fe935112c5eaa247063735.txt ADDED
@@ -0,0 +1,163 @@
1
+ RePaint scheduler
2
+
3
+
4
+ Overview
5
+
6
+ DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks.
7
+ Intended for use with RePaintPipeline.
8
+ Based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models
9
+ and the original implementation by Andreas Lugmayr et al.: https://github.com/andreas128/RePaint
10
+
11
+ RePaintScheduler
12
+
13
+
14
+ class diffusers.RePaintScheduler
15
+
16
+ <
17
+ source
18
+ >
19
+ (
20
+ num_train_timesteps: int = 1000
21
+ beta_start: float = 0.0001
22
+ beta_end: float = 0.02
23
+ beta_schedule: str = 'linear'
24
+ eta: float = 0.0
25
+ trained_betas: typing.Optional[numpy.ndarray] = None
26
+ clip_sample: bool = True
27
+
28
+ )
29
+
30
+
31
+ Parameters
32
+
33
+ num_train_timesteps (int) β€” number of diffusion steps used to train the model.
34
+
35
+
36
+ beta_start (float) β€” the starting beta value of inference.
37
+
38
+
39
+ beta_end (float) β€” the final beta value.
40
+
41
+
42
+ beta_schedule (str) β€”
43
+ the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
44
+ linear, scaled_linear, or squaredcos_cap_v2.
45
+
46
+
47
+ eta (float) β€”
48
+ The weight of the added noise in a diffusion step. Its value is between 0.0 and 1.0; 0.0 corresponds to the
+ DDIM scheduler and 1.0 to the DDPM scheduler.
50
+
51
+
52
+ trained_betas (np.ndarray, optional) β€”
53
+ option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc.
54
+
55
+
56
+ variance_type (str) β€”
57
+ options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small,
58
+ fixed_small_log, fixed_large, fixed_large_log, learned or learned_range.
59
+
60
+
61
+ clip_sample (bool, default True) β€”
62
+ option to clip predicted sample between -1 and 1 for numerical stability.
63
+
64
+
65
+
66
+ RePaint is a schedule for DDPM inpainting inside a given mask.
67
+ ~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__
68
+ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
69
+ SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and
70
+ from_pretrained() functions.
71
+ For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf
72
+
73
+ scale_model_input
74
+
75
+ <
76
+ source
77
+ >
78
+ (
79
+ sample: FloatTensor
80
+ timestep: typing.Optional[int] = None
81
+
82
+ )
83
+ β†’
84
+ torch.FloatTensor
85
+
86
+ Parameters
87
+
88
+ sample (torch.FloatTensor) β€” input sample
89
+
90
+
91
+ timestep (int, optional) β€” current timestep
92
+
93
+
94
+ Returns
95
+
96
+ torch.FloatTensor
97
+
98
+
99
+
100
+ scaled input sample
101
+
102
+
103
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
104
+ current timestep.
105
+
106
+ step
107
+
108
+ <
109
+ source
110
+ >
111
+ (
112
+ model_output: FloatTensor
113
+ timestep: int
114
+ sample: FloatTensor
115
+ original_image: FloatTensor
116
+ mask: FloatTensor
117
+ generator: typing.Optional[torch._C.Generator] = None
118
+ return_dict: bool = True
119
+
120
+ )
121
+ β†’
122
+ ~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple
123
+
124
+ Parameters
125
+
126
+ model_output (torch.FloatTensor) β€” direct output from learned
127
+ diffusion model.
128
+
129
+
130
+ timestep (int) β€” current discrete timestep in the diffusion chain.
131
+
132
+
133
+ sample (torch.FloatTensor) β€”
134
+ current instance of sample being created by diffusion process.
135
+
136
+
137
+ original_image (torch.FloatTensor) β€”
138
+ the original image to inpaint on.
139
+
140
+
141
+ mask (torch.FloatTensor) β€”
142
+ the mask where 0.0 values define which part of the original image to inpaint (change).
143
+
144
+
145
+ generator (torch.Generator, optional) β€” random number generator.
146
+
147
+
148
+ return_dict (bool) β€” option for returning tuple rather than
149
+ RePaintSchedulerOutput class
150
+
151
+
152
+ Returns
153
+
154
+ ~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple
155
+
156
+
157
+
158
+ ~schedulers.scheduling_utils.RePaintSchedulerOutput if return_dict is True, otherwise a tuple. When
159
+ returning a tuple, the first element is the sample tensor.
160
+
161
+
162
+ Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
163
+ process from the learned model outputs (most often the predicted noise).
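Because this scheduler is intended to be used through RePaintPipeline, a minimal sketch of that pairing follows. The checkpoint name and the resampling settings (jump_length, jump_n_sample) are assumptions taken from common RePaint examples rather than from this page, and original_image / mask_image are PIL images you provide:
Copied
import torch
from diffusers import RePaintPipeline, RePaintScheduler

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    image=original_image,   # PIL image to inpaint
    mask_image=mask_image,  # 0.0 (black) marks the region to repaint, as described above
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,
    jump_n_sample=10,
    generator=generator,
)
inpainted = output.images[0]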
scrapped_outputs/01a8586bc0784a4627557a3815ff5b5d.txt ADDED
@@ -0,0 +1,100 @@
1
+ Stochastic Karras VE
2
+
3
+
4
+ Overview
5
+
6
+ Elucidating the Design Space of Diffusion-Based Generative Models by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine.
7
+ The abstract of the paper is the following:
8
+ We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.
9
+ This pipeline implements the stochastic sampling tailored to the Variance Exploding (VE) models.
10
+
11
+ Available Pipelines:
12
+
13
+ Pipeline
14
+ Tasks
15
+ Colab
16
+ pipeline_stochastic_karras_ve.py
17
+ Unconditional Image Generation
18
+ -
19
+
20
+ KarrasVePipeline
21
+
22
+
23
+ class diffusers.KarrasVePipeline
24
+
25
+ <
26
+ source
27
+ >
28
+ (
29
+ unet: UNet2DModel
30
+ scheduler: KarrasVeScheduler
31
+
32
+ )
33
+
34
+
35
+ Parameters
36
+
37
+ unet (UNet2DModel) β€” U-Net architecture to denoise the encoded image.
38
+
39
+
40
+ scheduler (KarrasVeScheduler) β€”
41
+ Scheduler for the diffusion process to be used in combination with unet to denoise the encoded image.
42
+
43
+
44
+
45
+ Stochastic sampling from Karras et al. [1] tailored to the Variance Exploding (VE) models [2]. Use Algorithm 2 and
46
+ the VE column of Table 1 from [1] for reference.
47
+ [1] Karras, Tero, et al. β€œElucidating the Design Space of Diffusion-Based Generative Models.”
48
+ https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. β€œScore-based generative modeling through stochastic
49
+ differential equations.” https://arxiv.org/abs/2011.13456
50
+
51
+ __call__
52
+
53
+ <
54
+ source
55
+ >
56
+ (
57
+ batch_size: int = 1
58
+ num_inference_steps: int = 50
59
+ generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
60
+ output_type: typing.Optional[str] = 'pil'
61
+ return_dict: bool = True
62
+ **kwargs
63
+
64
+ )
65
+ β†’
66
+ ImagePipelineOutput or tuple
67
+
68
+ Parameters
69
+
70
+ batch_size (int, optional, defaults to 1) β€”
71
+ The number of images to generate.
72
+
73
+
74
+ generator (torch.Generator, optional) β€”
75
+ One or a list of torch generator(s)
76
+ to make generation deterministic.
77
+
78
+
79
+ num_inference_steps (int, optional, defaults to 50) β€”
80
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
81
+ expense of slower inference.
82
+
83
+
84
+ output_type (str, optional, defaults to "pil") β€”
85
+ The output format of the generated image. Choose between
86
+ PIL: PIL.Image.Image or np.array.
87
+
88
+
89
+ return_dict (bool, optional, defaults to True) β€”
90
+ Whether or not to return a ImagePipelineOutput instead of a plain tuple.
91
+
92
+
93
+ Returns
94
+
95
+ ImagePipelineOutput or tuple
96
+
97
+
98
+
99
+ ~pipelines.utils.ImagePipelineOutput if return_dict is
100
+ True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
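A minimal usage sketch of the pipeline call documented above; the checkpoint name is an assumption (any unconditional UNet2DModel checkpoint paired with a Karras VE scheduler config would be used the same way):
Copied
import torch
from diffusers import KarrasVePipeline

pipe = KarrasVePipeline.from_pretrained("google/ncsnpp-celebahq-256").to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(batch_size=1, num_inference_steps=50, generator=generator).images[0]
image.save("karras_ve_sample.png")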
scrapped_outputs/01be2bbed29849c60e5daa8454e05de7.txt ADDED
@@ -0,0 +1,286 @@
1
+ Custom Pipelines
2
+
3
+ For more information about community pipelines, please have a look at this issue.
4
+ Community examples consist of both inference and training examples that have been added by the community.
5
+ Please have a look at the following table to get an overview of all community examples. Click on the Code Example to get a copy-and-paste ready code example that you can try out.
6
+ If a community doesn’t work as expected, please open an issue and ping the author on it.
7
+ Example
8
+ Description
9
+ Code Example
10
+ Colab
11
+ Author
12
+ CLIP Guided Stable Diffusion
13
+ Doing CLIP guidance for text to image generation with Stable Diffusion
14
+ CLIP Guided Stable Diffusion
15
+
16
+ Suraj Patil
17
+ One Step U-Net (Dummy)
18
+ Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841)
19
+ One Step U-Net
20
+ -
21
+ Patrick von Platen
22
+ Stable Diffusion Interpolation
23
+ Interpolate the latent space of Stable Diffusion between different prompts/seeds
24
+ Stable Diffusion Interpolation
25
+ -
26
+ Nate Raw
27
+ Stable Diffusion Mega
28
+ One Stable Diffusion Pipeline with all functionalities of Text2Image, Image2Image and Inpainting
29
+ Stable Diffusion Mega
30
+ -
31
+ Patrick von Platen
32
+ Long Prompt Weighting Stable Diffusion
33
+ One Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt.
34
+ Long Prompt Weighting Stable Diffusion
35
+ -
36
+ SkyTNT
37
+ Speech to Image
38
+ Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images
39
+ Speech to Image
40
+ -
41
+ Mikail Duzenli
42
+ To load a custom pipeline you just need to pass the custom_pipeline argument to DiffusionPipeline, as one of the files in diffusers/examples/community. Feel free to send a PR with your own pipelines, we will merge them quickly.
43
+
44
+
45
+ Copied
46
+ pipe = DiffusionPipeline.from_pretrained(
47
+ "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder"
48
+ )
49
+
50
+ Example usages
51
+
52
+
53
+ CLIP Guided Stable Diffusion
54
+
55
+ CLIP guided stable diffusion can help to generate more realistic images
56
+ by guiding stable diffusion at every denoising step with an additional CLIP model.
57
+ The following code requires roughly 12GB of GPU RAM.
58
+
59
+
60
+ Copied
61
+ from diffusers import DiffusionPipeline
62
+ from transformers import CLIPFeatureExtractor, CLIPModel
63
+ import torch
64
+
65
+
66
+ feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
67
+ clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)
68
+
69
+
70
+ guided_pipeline = DiffusionPipeline.from_pretrained(
71
+ "CompVis/stable-diffusion-v1-4",
72
+ custom_pipeline="clip_guided_stable_diffusion",
73
+ clip_model=clip_model,
74
+ feature_extractor=feature_extractor,
75
+ torch_dtype=torch.float16,
76
+ )
77
+ guided_pipeline.enable_attention_slicing()
78
+ guided_pipeline = guided_pipeline.to("cuda")
79
+
80
+ prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
81
+
82
+ generator = torch.Generator(device="cuda").manual_seed(0)
83
+ images = []
84
+ for i in range(4):
85
+ image = guided_pipeline(
86
+ prompt,
87
+ num_inference_steps=50,
88
+ guidance_scale=7.5,
89
+ clip_guidance_scale=100,
90
+ num_cutouts=4,
91
+ use_cutouts=False,
92
+ generator=generator,
93
+ ).images[0]
94
+ images.append(image)
95
+
96
+ # save images locally
97
+ for i, img in enumerate(images):
98
+ img.save(f"./clip_guided_sd/image_{i}.png")
99
+ The images list contains a list of PIL images that can be saved locally or displayed directly in a google colab.
100
+ Generated images tend to be of higher quality than when using stable diffusion natively. E.g. the above script generates the following images:
101
+ .
102
+
103
+ One Step Unet
104
+
105
+ The dummy β€œone-step-unet” can be run as follows:
106
+
107
+
108
+ Copied
109
+ from diffusers import DiffusionPipeline
110
+
111
+ pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
112
+ pipe()
113
+ Note: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841).
114
+
115
+ Stable Diffusion Interpolation
116
+
117
+ The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes.
118
+
119
+
120
+ Copied
121
+ from diffusers import DiffusionPipeline
122
+ import torch
123
+
124
+ pipe = DiffusionPipeline.from_pretrained(
125
+ "CompVis/stable-diffusion-v1-4",
126
+ torch_dtype=torch.float16,
127
+ safety_checker=None, # Very important for videos...lots of false positives while interpolating
128
+ custom_pipeline="interpolate_stable_diffusion",
129
+ ).to("cuda")
130
+ pipe.enable_attention_slicing()
131
+
132
+ frame_filepaths = pipe.walk(
133
+ prompts=["a dog", "a cat", "a horse"],
134
+ seeds=[42, 1337, 1234],
135
+ num_interpolation_steps=16,
136
+ output_dir="./dreams",
137
+ batch_size=4,
138
+ height=512,
139
+ width=512,
140
+ guidance_scale=8.5,
141
+ num_inference_steps=50,
142
+ )
143
+ The walk(...) function returns a list of images saved under the folder defined in output_dir. You can use these images to create videos with stable diffusion.
144
+ Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.
145
+
146
+ Stable Diffusion Mega
147
+
148
+ The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class.
149
+
150
+
151
+ Copied
152
+ #!/usr/bin/env python3
153
+ from diffusers import DiffusionPipeline
154
+ import PIL
155
+ import requests
156
+ from io import BytesIO
157
+ import torch
158
+
159
+
160
+ def download_image(url):
161
+ response = requests.get(url)
162
+ return PIL.Image.open(BytesIO(response.content)).convert("RGB")
163
+
164
+
165
+ pipe = DiffusionPipeline.from_pretrained(
166
+ "CompVis/stable-diffusion-v1-4",
167
+ custom_pipeline="stable_diffusion_mega",
168
+ torch_dtype=torch.float16,
169
+ )
170
+ pipe.to("cuda")
171
+ pipe.enable_attention_slicing()
172
+
173
+
174
+ ### Text-to-Image
175
+
176
+ images = pipe.text2img("An astronaut riding a horse").images
177
+
178
+ ### Image-to-Image
179
+
180
+ init_image = download_image(
181
+ "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
182
+ )
183
+
184
+ prompt = "A fantasy landscape, trending on artstation"
185
+
186
+ images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
187
+
188
+ ### Inpainting
189
+
190
+ img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
191
+ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
192
+ init_image = download_image(img_url).resize((512, 512))
193
+ mask_image = download_image(mask_url).resize((512, 512))
194
+
195
+ prompt = "a cat sitting on a bench"
196
+ images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images
197
+ As shown above, this one pipeline can run β€œtext-to-image”, β€œimage-to-image”, and β€œinpainting” all in a single pipeline.
198
+
199
+ Long Prompt Weighting Stable Diffusion
200
+
201
+ The pipeline lets you input prompts without the 77 token length limit, and you can increase the weighting of words by using ”()” or decrease it by using ”[]”.
202
+ The Pipeline also lets you use the main use cases of the stable diffusion pipeline in a single class.
203
+
204
+ pytorch
205
+
206
+
207
+
208
+ Copied
209
+ from diffusers import DiffusionPipeline
210
+ import torch
211
+
212
+ pipe = DiffusionPipeline.from_pretrained(
213
+ "hakurei/waifu-diffusion", custom_pipeline="lpw_stable_diffusion", torch_dtype=torch.float16
214
+ )
215
+ pipe = pipe.to("cuda")
216
+
217
+ prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
218
+ neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
219
+
220
+ pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
221
+
222
+ onnxruntime
223
+
224
+
225
+
226
+ Copied
227
+ from diffusers import DiffusionPipeline
228
+ import torch
229
+
230
+ pipe = DiffusionPipeline.from_pretrained(
231
+ "CompVis/stable-diffusion-v1-4",
232
+ custom_pipeline="lpw_stable_diffusion_onnx",
233
+ revision="onnx",
234
+ provider="CUDAExecutionProvider",
235
+ )
236
+
237
+ prompt = "a photo of an astronaut riding a horse on mars, best quality"
238
+ neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
239
+
240
+ pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
241
+ If you see Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors, do not worry; this warning is expected with this pipeline and can be ignored.
242
+
243
+ Speech to Image
244
+
245
+ The following code can generate an image from an audio sample using pre-trained OpenAI whisper-small and Stable Diffusion.
246
+
247
+
248
+ Copied
249
+ import torch
250
+
251
+ import matplotlib.pyplot as plt
252
+ from datasets import load_dataset
253
+ from diffusers import DiffusionPipeline
254
+ from transformers import (
255
+ WhisperForConditionalGeneration,
256
+ WhisperProcessor,
257
+ )
258
+
259
+
260
+ device = "cuda" if torch.cuda.is_available() else "cpu"
261
+
262
+ ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
263
+
264
+ audio_sample = ds[3]
265
+
266
+ text = audio_sample["text"].lower()
267
+ speech_data = audio_sample["audio"]["array"]
268
+
269
+ model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
270
+ processor = WhisperProcessor.from_pretrained("openai/whisper-small")
271
+
272
+ diffuser_pipeline = DiffusionPipeline.from_pretrained(
273
+ "CompVis/stable-diffusion-v1-4",
274
+ custom_pipeline="speech_to_image_diffusion",
275
+ speech_model=model,
276
+ speech_processor=processor,
277
+
278
+ torch_dtype=torch.float16,
279
+ )
280
+
281
+ diffuser_pipeline.enable_attention_slicing()
282
+ diffuser_pipeline = diffuser_pipeline.to(device)
283
+
284
+ output = diffuser_pipeline(speech_data)
285
+ plt.imshow(output.images[0])
286
+ This example produces the following image:
scrapped_outputs/01d80081236d3aed18b8ca7aabd28034.txt ADDED
@@ -0,0 +1,18 @@
1
+ VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in πŸ€— Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of β€œposterior collapse” β€” where the latents are ignored when they are paired with a powerful autoregressive decoder β€” typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) β€” Number of channels in the input image. out_channels (int, optional, defaults to 3) β€” Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) β€”
2
+ Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) β€”
3
+ Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) β€”
4
+ Tuple of block output channels. layers_per_block (int, optional, defaults to 1) β€” Number of layers per block. act_fn (str, optional, defaults to "silu") β€” The activation function to use. latent_channels (int, optional, defaults to 3) β€” Number of channels in the latent space. sample_size (int, optional, defaults to 32) β€” Sample input size. num_vq_embeddings (int, optional, defaults to 256) β€” Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) β€” Number of groups for normalization layers. vq_embed_dim (int, optional) β€” Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) β€”
5
+ The component-wise standard deviation of the trained latent space computed using the first batch of the
6
+ training set. This is used to scale the latent space to have unit variance when training the diffusion
7
+ model. The latents are scaled with the formula z = z * scaling_factor before being passed to the
8
+ diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image
9
+ Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") β€”
10
+ Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented
11
+ for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) β†’ VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) β€” Input sample. return_dict (bool, optional, defaults to True) β€”
12
+ Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns
13
+ VQEncoderOutput or tuple
14
+
15
+ If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple
16
+ is returned.
17
+ The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) β€”
18
+ The encoded output sample from the last layer of the model. Output of VQModel encoding method.
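A minimal sketch of the forward pass described above, run on a randomly initialized model with the default configuration; it only illustrates the expected input shape, not a trained checkpoint:
Copied
import torch
from diffusers import VQModel

model = VQModel()  # defaults: 3 input channels, 32x32 sample size, 256 codebook vectors
sample = torch.randn(1, 3, 32, 32)

with torch.no_grad():
    # forward() runs the full encode -> quantize -> decode round trip on the input sample
    output = model(sample)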
scrapped_outputs/01df407ddd0ca5935cbb0f71822a1c38.txt ADDED
@@ -0,0 +1,83 @@
1
+ Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) β€”
2
+ Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) β€”
3
+ Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) β€”
4
+ A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) β€”
5
+ A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) β€”
6
+ A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
7
+ DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) β€”
8
+ Classification module that estimates whether generated images could be considered offensive or harmful.
9
+ Please refer to the model card for more details
10
+ about a model’s potential harms. feature_extractor (CLIPImageProcessor) β€”
11
+ A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. πŸ§ͺ This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
12
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) β†’ StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) β€”
13
+ An example image to guide image generation. image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) β€”
14
+ Image or tensor representing an image batch to be inpainted (parts of the image are masked out with
15
+ mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) β€”
16
+ Image or tensor representing an image batch to mask image. White pixels in the mask are repainted,
17
+ while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel
18
+ (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the
19
+ expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
20
+ The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
21
+ The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) β€”
22
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
23
+ expense of slower inference. guidance_scale (float, optional, defaults to 5.0) —
24
+ A higher guidance scale value encourages the model to generate images closely linked to the text
25
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) β€”
26
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
27
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) β€”
28
+ The number of images to generate per prompt. eta (float, optional, defaults to 0.0) β€”
29
+ Corresponds to parameter eta (η) from the DDIM paper. Only applies
30
+ to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) β€”
31
+ A torch.Generator to make
32
+ generation deterministic. latents (torch.FloatTensor, optional) β€”
33
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
34
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
35
+ tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") β€”
36
+ The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) β€”
37
+ Whether or not to return a StableDiffusionPipelineOutput instead of a
38
+ plain tuple. callback (Callable, optional) β€”
39
+ A function that calls every callback_steps steps during inference. The function is called with the
40
+ following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) β€”
41
+ The frequency at which the callback function is called. If not specified, the callback is called at
42
+ every step. Returns
43
+ StableDiffusionPipelineOutput or tuple
44
+
45
+ If return_dict is True, StableDiffusionPipelineOutput is returned,
46
+ otherwise a tuple is returned where the first element is a list with the generated images and the
47
+ second element is a list of bools indicating whether the corresponding generated image contains
48
+ β€œnot-safe-for-work” (nsfw) content.
49
+ The call function to the pipeline for generation. Example: Copied >>> import PIL
50
+ >>> import requests
51
+ >>> import torch
52
+ >>> from io import BytesIO
53
+ >>> from diffusers import PaintByExamplePipeline
54
+
55
+
56
+ >>> def download_image(url):
57
+ ... response = requests.get(url)
58
+ ... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
59
+
60
+
61
+ >>> img_url = (
62
+ ... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png"
63
+ ... )
64
+ >>> mask_url = (
65
+ ... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png"
66
+ ... )
67
+ >>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg"
68
+
69
+ >>> init_image = download_image(img_url).resize((512, 512))
70
+ >>> mask_image = download_image(mask_url).resize((512, 512))
71
+ >>> example_image = download_image(example_url).resize((512, 512))
72
+
73
+ >>> pipe = PaintByExamplePipeline.from_pretrained(
74
+ ... "Fantasy-Studio/Paint-by-Example",
75
+ ... torch_dtype=torch.float16,
76
+ ... )
77
+ >>> pipe = pipe.to("cuda")
78
+
79
+ >>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
80
+ >>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) β€”
81
+ List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) β€”
82
+ List indicating whether the corresponding generated image contains β€œnot-safe-for-work” (nsfw) content or
83
+ None if safety checking could not be performed. Output class for Stable Diffusion pipelines.
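+ As a small, hedged illustration of the callback and callback_steps arguments documented above, the sketch below reuses pipe, init_image, mask_image, and example_image from the example earlier on this page; the logging behavior is an assumption, only the callback(step, timestep, latents) signature comes from the docs.
+ def log_progress(step: int, timestep: int, latents):
+     # Called every `callback_steps` denoising steps with the current latents.
+     print(f"step={step} timestep={timestep} latents={tuple(latents.shape)}")
+
+ image = pipe(
+     image=init_image,
+     mask_image=mask_image,
+     example_image=example_image,
+     callback=log_progress,
+     callback_steps=10,
+ ).images[0]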
scrapped_outputs/0247f496918051ff626a635f40c86068.txt ADDED
@@ -0,0 +1,217 @@
1
+ Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline πŸ’‘ Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline
2
+
3
+ repo_id = "runwayml/stable-diffusion-v1-5"
4
+ pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline
5
+
6
+ repo_id = "runwayml/stable-diffusion-v1-5"
7
+ pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline
8
+
9
+ repo_id = "runwayml/stable-diffusion-v1-5"
10
+ pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install
11
+ git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline
12
+
13
+ repo_id = "./stable-diffusion-v1-5"
14
+ stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline
15
+
16
+ repo_id = "runwayml/stable-diffusion-v1-5"
17
+ stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
18
+ stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler
19
+
20
+ repo_id = "runwayml/stable-diffusion-v1-5"
21
+ scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
22
+ stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline
23
+
24
+ repo_id = "runwayml/stable-diffusion-v1-5"
25
+ stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True)
26
+ """
27
+ You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
28
+ """ Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
29
+
30
+ model_id = "runwayml/stable-diffusion-v1-5"
31
+ stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
32
+
33
+ components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
34
+
35
+ model_id = "runwayml/stable-diffusion-v1-5"
36
+ stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
37
+ stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(
38
+ vae=stable_diffusion_txt2img.vae,
39
+ text_encoder=stable_diffusion_txt2img.text_encoder,
40
+ tokenizer=stable_diffusion_txt2img.tokenizer,
41
+ unet=stable_diffusion_txt2img.unet,
42
+ scheduler=stable_diffusion_txt2img.scheduler,
43
+ safety_checker=None,
44
+ feature_extractor=None,
45
+ requires_safety_checker=False,
46
+ ) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. πŸ’‘ When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. checkpoint type weight name argument for loading weights original diffusion_pytorch_model.bin floating point diffusion_pytorch_model.fp16.bin variant, torch_dtype non-EMA diffusion_pytorch_model.non_ema.bin variant There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline
47
+ import torch
48
+
49
+ # load fp16 variant
50
+ stable_diffusion = DiffusionPipeline.from_pretrained(
51
+ "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
52
+ )
53
+ # load non_ema variant
54
+ stable_diffusion = DiffusionPipeline.from_pretrained(
55
+ "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True
56
+ ) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline
57
+
58
+ # save as fp16 variant
59
+ stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16")
60
+ # save as non-ema variant
61
+ stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # πŸ‘Ž this won't work
62
+ stable_diffusion = DiffusionPipeline.from_pretrained(
63
+ "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
64
+ )
65
+ # πŸ‘ this works
66
+ stable_diffusion = DiffusionPipeline.from_pretrained(
67
+ "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
68
+ ) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel
69
+
70
+ repo_id = "runwayml/stable-diffusion-v1-5"
71
+ model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel
72
+
73
+ repo_id = "google/ddpm-cifar10-32"
74
+ model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel
75
+
76
+ model = UNet2DConditionModel.from_pretrained(
77
+ "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True
78
+ )
79
+ model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers.
80
+ For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline
81
+ from diffusers import (
82
+ DDPMScheduler,
83
+ DDIMScheduler,
84
+ PNDMScheduler,
85
+ LMSDiscreteScheduler,
86
+ EulerAncestralDiscreteScheduler,
87
+ EulerDiscreteScheduler,
88
+ DPMSolverMultistepScheduler,
89
+ )
90
+
91
+ repo_id = "runwayml/stable-diffusion-v1-5"
92
+
93
+ ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler")
94
+ ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler")
95
+ pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler")
96
+ lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
97
+ euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
98
+ euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
99
+ dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler")
100
+
101
+ # replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler`
102
+ pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline
103
+
104
+ repo_id = "runwayml/stable-diffusion-v1-5"
105
+ pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
106
+ print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from πŸ€— Transformers. "safety_checker": a component for screening against harmful content. "scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from πŸ€— Transformers. "tokenizer": a CLIPTokenizer from πŸ€— Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline {
107
+ "feature_extractor": [
108
+ "transformers",
109
+ "CLIPImageProcessor"
110
+ ],
111
+ "safety_checker": [
112
+ "stable_diffusion",
113
+ "StableDiffusionSafetyChecker"
114
+ ],
115
+ "scheduler": [
116
+ "diffusers",
117
+ "PNDMScheduler"
118
+ ],
119
+ "text_encoder": [
120
+ "transformers",
121
+ "CLIPTextModel"
122
+ ],
123
+ "tokenizer": [
124
+ "transformers",
125
+ "CLIPTokenizer"
126
+ ],
127
+ "unet": [
128
+ "diffusers",
129
+ "UNet2DConditionModel"
130
+ ],
131
+ "vae": [
132
+ "diffusers",
133
+ "AutoencoderKL"
134
+ ]
135
+ } Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied .
136
+ β”œβ”€β”€ feature_extractor
137
+ β”‚Β Β  └── preprocessor_config.json
138
+ β”œβ”€β”€ model_index.json
139
+ β”œβ”€β”€ safety_checker
140
+ β”‚Β Β  β”œβ”€β”€ config.json
141
+ | β”œβ”€β”€ model.fp16.safetensors
142
+ β”‚ β”œβ”€β”€ model.safetensors
143
+ β”‚ β”œβ”€β”€ pytorch_model.bin
144
+ | └── pytorch_model.fp16.bin
145
+ β”œβ”€β”€ scheduler
146
+ β”‚Β Β  └── scheduler_config.json
147
+ β”œβ”€β”€ text_encoder
148
+ β”‚Β Β  β”œβ”€β”€ config.json
149
+ | β”œβ”€β”€ model.fp16.safetensors
150
+ β”‚ β”œβ”€β”€ model.safetensors
151
+ β”‚ |── pytorch_model.bin
152
+ | └── pytorch_model.fp16.bin
153
+ β”œβ”€β”€ tokenizer
154
+ β”‚Β Β  β”œβ”€β”€ merges.txt
155
+ β”‚Β Β  β”œβ”€β”€ special_tokens_map.json
156
+ β”‚Β Β  β”œβ”€β”€ tokenizer_config.json
157
+ β”‚Β Β  └── vocab.json
158
+ β”œβ”€β”€ unet
159
+ β”‚Β Β  β”œβ”€β”€ config.json
160
+ β”‚Β Β  β”œβ”€β”€ diffusion_pytorch_model.bin
161
+ | |── diffusion_pytorch_model.fp16.bin
162
+ β”‚ |── diffusion_pytorch_model.f16.safetensors
163
+ β”‚ |── diffusion_pytorch_model.non_ema.bin
164
+ β”‚ |── diffusion_pytorch_model.non_ema.safetensors
165
+ β”‚ └── diffusion_pytorch_model.safetensors
166
+ |── vae
167
+ . β”œβ”€β”€ config.json
168
+ . β”œβ”€β”€ diffusion_pytorch_model.bin
169
+ β”œβ”€β”€ diffusion_pytorch_model.fp16.bin
170
+ β”œβ”€β”€ diffusion_pytorch_model.fp16.safetensors
171
+ └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer
172
+ CLIPTokenizer(
173
+ name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer",
174
+ vocab_size=49408,
175
+ model_max_length=77,
176
+ is_fast=False,
177
+ padding_side="right",
178
+ truncation_side="right",
179
+ special_tokens={
180
+ "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
181
+ "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
182
+ "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
183
+ "pad_token": "<|endoftext|>",
184
+ },
185
+ clean_up_tokenization_spaces=True
186
+ ) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied {
187
+ "_class_name": "StableDiffusionPipeline",
188
+ "_diffusers_version": "0.6.0",
189
+ "feature_extractor": [
190
+ "transformers",
191
+ "CLIPImageProcessor"
192
+ ],
193
+ "safety_checker": [
194
+ "stable_diffusion",
195
+ "StableDiffusionSafetyChecker"
196
+ ],
197
+ "scheduler": [
198
+ "diffusers",
199
+ "PNDMScheduler"
200
+ ],
201
+ "text_encoder": [
202
+ "transformers",
203
+ "CLIPTextModel"
204
+ ],
205
+ "tokenizer": [
206
+ "transformers",
207
+ "CLIPTokenizer"
208
+ ],
209
+ "unet": [
210
+ "diffusers",
211
+ "UNet2DConditionModel"
212
+ ],
213
+ "vae": [
214
+ "diffusers",
215
+ "AutoencoderKL"
216
+ ]
217
+ }
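+ If you only want to inspect these entries without instantiating a pipeline, one option (a sketch, assuming the huggingface_hub client is installed) is to download just the model_index.json file and read it with the standard json module:
+ import json
+ from huggingface_hub import hf_hub_download
+
+ # Download (and cache) only the pipeline index file from the Hub.
+ index_path = hf_hub_download("runwayml/stable-diffusion-v1-5", "model_index.json")
+
+ with open(index_path) as f:
+     index = json.load(f)
+
+ print(index["_class_name"])  # pipeline class that from_pretrained() will instantiate
+ for name, value in index.items():
+     if isinstance(value, list):  # component entries are [library, class] pairs
+         library, cls = value
+         print(f"{name}: {cls} (from {library})")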
scrapped_outputs/024b6d495f66ffbe96d4b6dc2553b492.txt ADDED
@@ -0,0 +1,260 @@
1
+ Performing inference with LCM-LoRA Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case.
2
+ This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately.
3
+ Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc.
4
+ The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for text-to-image image-to-image combined with styled LoRAs ControlNet/T2I-Adapter inpainting AnimateDiff Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs.
5
+ LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale between [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the scheduler: LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models. Copied import torch
6
+ from diffusers import DiffusionPipeline, LCMScheduler
7
+
8
+ pipe = DiffusionPipeline.from_pretrained(
9
+ "stabilityai/stable-diffusion-xl-base-1.0",
10
+ variant="fp16",
11
+ torch_dtype=torch.float16
12
+ ).to("cuda")
13
+
14
+ # set scheduler
15
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
16
+
17
+ # load LCM-LoRA
18
+ pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
19
+
20
+ prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
21
+
22
+ generator = torch.manual_seed(42)
23
+ image = pipe(
24
+ prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0
25
+ ).images[0] Notice that we use only 4 steps for generation, which is far fewer than what's typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifier-free guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don't have any effect on the denoising process. You can also use guidance with LCM-LoRA, but due to the nature of training, the model is very sensitive to guidance_scale values; high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill it separately. Let's look at how we can perform inference with a fine-tuned model. In this example, we'll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime. Copied from diffusers import DiffusionPipeline, LCMScheduler
26
+
27
+ pipe = DiffusionPipeline.from_pretrained(
28
+ "Linaqruf/animagine-xl",
29
+ variant="fp16",
30
+ torch_dtype=torch.float16
31
+ ).to("cuda")
32
+
33
+ # set scheduler
34
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
35
+
36
+ # load LCM-LoRA
37
+ pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
38
+
39
+ prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
40
+
41
+ generator = torch.manual_seed(0)
42
+ image = pipe(
43
+ prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0
44
+ ).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5 . Copied import torch
45
+ from diffusers import AutoPipelineForImage2Image, LCMScheduler
46
+ from diffusers.utils import make_image_grid, load_image
47
+
48
+ pipe = AutoPipelineForImage2Image.from_pretrained(
49
+ "Lykon/dreamshaper-7",
50
+ torch_dtype=torch.float16,
51
+ variant="fp16",
52
+ ).to("cuda")
53
+
54
+ # set scheduler
55
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
56
+
57
+ # load LCM-LoRA
58
+ pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
59
+
60
+ # prepare image
61
+ url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
62
+ init_image = load_image(url)
63
+ prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k"
64
+
65
+ # pass prompt and image to pipeline
66
+ generator = torch.manual_seed(0)
67
+ image = pipe(
68
+ prompt,
69
+ image=init_image,
70
+ num_inference_steps=4,
71
+ guidance_scale=1,
72
+ strength=0.6,
73
+ generator=generator
74
+ ).images[0]
75
+ make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for the num_inference_steps, strength, and guidance_scale parameters and choosing the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled images in very few steps (4-8). In the following example, we'll use the LCM-LoRA with the papercut LoRA.
76
+ To learn more about how to combine LoRAs, refer to this guide. Copied import torch
77
+ from diffusers import DiffusionPipeline, LCMScheduler
78
+
79
+ pipe = DiffusionPipeline.from_pretrained(
80
+ "stabilityai/stable-diffusion-xl-base-1.0",
81
+ variant="fp16",
82
+ torch_dtype=torch.float16
83
+ ).to("cuda")
84
+
85
+ # set scheduler
86
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
87
+
88
+ # load LoRAs
89
+ pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm")
90
+ pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut")
91
+
92
+ # Combine LoRAs
93
+ pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8])
94
+
95
+ prompt = "papercut, a cute fox"
96
+ generator = torch.manual_seed(0)
97
+ image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0]
98
+ image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet For this example, we’ll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. Copied import torch
99
+ import cv2
100
+ import numpy as np
101
+ from PIL import Image
102
+
103
+ from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler
104
+ from diffusers.utils import load_image
105
+
106
+ image = load_image(
107
+ "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
108
+ ).resize((512, 512))
109
+
110
+ image = np.array(image)
111
+
112
+ low_threshold = 100
113
+ high_threshold = 200
114
+
115
+ image = cv2.Canny(image, low_threshold, high_threshold)
116
+ image = image[:, :, None]
117
+ image = np.concatenate([image, image, image], axis=2)
118
+ canny_image = Image.fromarray(image)
119
+
120
+ controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
121
+ pipe = StableDiffusionControlNetPipeline.from_pretrained(
122
+ "runwayml/stable-diffusion-v1-5",
123
+ controlnet=controlnet,
124
+ torch_dtype=torch.float16,
125
+ safety_checker=None,
126
+ variant="fp16"
127
+ ).to("cuda")
128
+
129
+ # set scheduler
130
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
131
+
132
+ # load LCM-LoRA
133
+ pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
134
+
135
+ generator = torch.manual_seed(0)
136
+ image = pipe(
137
+ "the mona lisa",
138
+ image=canny_image,
139
+ num_inference_steps=4,
140
+ guidance_scale=1.5,
141
+ controlnet_conditioning_scale=0.8,
142
+ cross_attention_kwargs={"scale": 1},
143
+ generator=generator,
144
+ ).images[0]
145
+ make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all inputs, so we recommend trying different values for the num_inference_steps, guidance_scale, controlnet_conditioning_scale, and cross_attention_kwargs parameters and choosing the best one. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL. Copied import torch
146
+ import cv2
147
+ import numpy as np
148
+ from PIL import Image
149
+
150
+ from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler
151
+ from diffusers.utils import load_image, make_image_grid
152
+
153
+ # Prepare image
154
+ # Detect the canny map in low resolution to avoid high-frequency details
155
+ image = load_image(
156
+ "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg"
157
+ ).resize((384, 384))
158
+
159
+ image = np.array(image)
160
+
161
+ low_threshold = 100
162
+ high_threshold = 200
163
+
164
+ image = cv2.Canny(image, low_threshold, high_threshold)
165
+ image = image[:, :, None]
166
+ image = np.concatenate([image, image, image], axis=2)
167
+ canny_image = Image.fromarray(image).resize((1024, 1024))
168
+
169
+ # load adapter
170
+ adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")
171
+
172
+ pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
173
+ "stabilityai/stable-diffusion-xl-base-1.0",
174
+ adapter=adapter,
175
+ torch_dtype=torch.float16,
176
+ variant="fp16",
177
+ ).to("cuda")
178
+
179
+ # set scheduler
180
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
181
+
182
+ # load LCM-LoRA
183
+ pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
184
+
185
+ prompt = "Mystical fairy in real, magic, 4k picture, high quality"
186
+ negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured"
187
+
188
+ generator = torch.manual_seed(0)
189
+ image = pipe(
190
+ prompt=prompt,
191
+ negative_prompt=negative_prompt,
192
+ image=canny_image,
193
+ num_inference_steps=4,
194
+ guidance_scale=1.5,
195
+ adapter_conditioning_scale=0.8,
196
+ adapter_conditioning_factor=1,
197
+ generator=generator,
198
+ ).images[0]
199
+ make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well. Copied import torch
200
+ from diffusers import AutoPipelineForInpainting, LCMScheduler
201
+ from diffusers.utils import load_image, make_image_grid
202
+
203
+ pipe = AutoPipelineForInpainting.from_pretrained(
204
+ "runwayml/stable-diffusion-inpainting",
205
+ torch_dtype=torch.float16,
206
+ variant="fp16",
207
+ ).to("cuda")
208
+
209
+ # set scheduler
210
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
211
+
212
+ # load LCM-LoRA
213
+ pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
214
+
215
+ # load base and mask image
216
+ init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
217
+ mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")
218
+
219
+ # generator = torch.Generator("cuda").manual_seed(92)
220
+ prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
221
+ generator = torch.manual_seed(0)
222
+ image = pipe(
223
+ prompt=prompt,
224
+ image=init_image,
225
+ mask_image=mask_image,
226
+ generator=generator,
227
+ num_inference_steps=4,
228
+ guidance_scale=4,
229
+ ).images[0]
230
+ make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow.
231
+ LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff. Copied import torch
232
+ from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler
233
+ from diffusers.utils import export_to_gif
234
+
235
+ adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5")
236
+ pipe = AnimateDiffPipeline.from_pretrained(
237
+ "frankjoshua/toonyou_beta6",
238
+ motion_adapter=adapter,
239
+ ).to("cuda")
240
+
241
+ # set scheduler
242
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
243
+
244
+ # load LCM-LoRA
245
+ pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")
246
+ pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora")
247
+
248
+ pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2])
249
+
250
+ prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress"
251
+ generator = torch.manual_seed(0)
252
+ frames = pipe(
253
+ prompt=prompt,
254
+ num_inference_steps=5,
255
+ guidance_scale=1.25,
256
+ cross_attention_kwargs={"scale": 1},
257
+ num_frames=24,
258
+ generator=generator
259
+ ).frames[0]
260
+ export_to_gif(frames, "animation.gif")
scrapped_outputs/029a71d92796bdac8ab84604964508c7.txt ADDED
@@ -0,0 +1,53 @@
1
+ UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in πŸ€— Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in πŸ€— Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) β€”
2
+ Height and width of input/output sample. in_channels (int, optional, defaults to 4) β€” The number of channels in the input sample. out_channels (int, optional, defaults to 4) β€” The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D")) β€”
3
+ The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D")) β€”
4
+ The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) β€”
5
+ The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) β€” The number of layers per block. downsample_padding (int, optional, defaults to 1) β€” The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) β€” The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") β€” The activation function to use. norm_num_groups (int, optional, defaults to 32) β€” The number of groups to use for the normalization.
6
+ If None, normalization and activation layers are skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1024) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 64) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample
7
+ shaped output. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented
8
+ for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) β€”
9
+ The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually
10
+ over each tensor of dim=dim. dim (int, optional, defaults to 0) β€”
11
+ The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch)
12
+ or dim=1 (sequence length). Sets the attention processor to use feed forward
13
+ chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) β€”
14
+ Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
15
+ mitigate the β€œoversmoothing effect” in the enhanced denoising process. s2 (float) β€”
16
+ Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
17
+ mitigate the β€œoversmoothing effect” in the enhanced denoising process. b1 (float) β€” Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) β€” Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that
18
+ are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) β†’ ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) β€”
19
+ The noisy input tensor with the following shape (batch, num_channels, num_frames, height, width. timestep (torch.FloatTensor or float or int) β€” The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) β€”
20
+ The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) β€”
21
+ Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
22
+ timestep_cond β€” (torch.Tensor, optional, defaults to None):
23
+ Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed
24
+ through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) β€”
25
+ An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask
26
+ is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large
27
+ negative values to the attention scores corresponding to β€œdiscard” tokens. cross_attention_kwargs (dict, optional) β€”
28
+ A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under
29
+ self.processor in
30
+ diffusers.models.attention_processor.
31
+ down_block_additional_residuals β€” (tuple of torch.Tensor, optional):
32
+ A tuple of tensors that if specified are added to the residuals of down unet blocks.
33
+ mid_block_additional_residual β€” (torch.Tensor, optional):
34
+ A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) β€”
35
+ Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain
36
+ tuple. cross_attention_kwargs (dict, optional) β€”
37
+ A kwargs dictionary that if specified is passed along to the AttnProcessor. Returns
38
+ ~models.unet_3d_condition.UNet3DConditionOutput or tuple
39
+
40
+ If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise
41
+ a tuple is returned where the first element is the sample tensor.
42
+ The UNet3DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query,
43
+ key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is πŸ§ͺ experimental. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") β€”
44
+ When "auto", input to the attention heads is halved, so attention is computed in two steps. If
45
+ "max", maximum amount of memory is saved by running only one slice at a time. If a number is
46
+ provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim
47
+ must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in
48
+ several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) β€”
49
+ The instantiated processor class or a dictionary of processor classes that will be set as the processor
50
+ for all Attention layers.
51
+ If processor is a dict, the key needs to define the path to the corresponding cross attention
52
+ processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is πŸ§ͺ experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, num_frames, height, width)) β€”
53
+ The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel.
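+ A hedged sketch of calling the model directly with the forward arguments documented above; it assumes the damo-vilab/text-to-video-ms-1.7b checkpoint stores a UNet3DConditionModel in its unet subfolder, and the tensor shapes are purely illustrative.
+ import torch
+ from diffusers import UNet3DConditionModel
+
+ # Assumed checkpoint; any repository with UNet3DConditionModel weights works the same way.
+ unet = UNet3DConditionModel.from_pretrained(
+     "damo-vilab/text-to-video-ms-1.7b", subfolder="unet", torch_dtype=torch.float16
+ ).to("cuda")
+ unet.enable_forward_chunking()  # optional: chunk the feed-forward layers to save memory (documented above)
+
+ # sample: (batch, channels, num_frames, height, width)
+ sample = torch.randn(1, 4, 8, 32, 32, dtype=torch.float16, device="cuda")
+ # encoder_hidden_states: (batch, sequence_length, cross_attention_dim)
+ encoder_hidden_states = torch.randn(1, 77, 1024, dtype=torch.float16, device="cuda")
+
+ with torch.no_grad():
+     noise_pred = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states).sample
+ print(noise_pred.shape)  # matches the shape of `sample`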
scrapped_outputs/02a8a2246909676ce154902d0be79029.txt ADDED
File without changes
scrapped_outputs/02aee9759affa29fb25ab0383cbb3c8d.txt ADDED
@@ -0,0 +1,138 @@
1
+ UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in πŸ€— Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in πŸ€— Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: int = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads = 64 ) Parameters 
sample_size (int or Tuple[int, int], optional, defaults to None) β€”
2
+ Height and width of input/output sample. in_channels (int, optional, defaults to 4) β€” Number of channels in the input sample. out_channels (int, optional, defaults to 4) β€” Number of channels in the output. center_input_sample (bool, optional, defaults to False) β€” Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to True) β€”
3
+ Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) β€” The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) β€”
4
+ The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") β€”
5
+ Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or
6
+ UNetMidBlock2DSimpleCrossAttn. If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) β€”
7
+ The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, defaults to False) β€”
8
+ Whether to include self-attention in the basic transformer blocks, see
9
+ BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) β€”
10
+ The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) β€” The number of layers per block. downsample_padding (int, optional, defaults to 1) β€” The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) β€” The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) β€” The dropout probability to use. act_fn (str, optional, defaults to "silu") β€” The activation function to use. norm_num_groups (int, optional, defaults to 32) β€” The number of groups to use for the normalization.
11
+ If None, normalization and activation layers are skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) β€” The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) β€”
12
+ The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) β€”
13
+ The number of transformer blocks of type BasicTransformerBlock. Only relevant for
14
+ CrossAttnDownBlock2D, CrossAttnUpBlock2D,
15
+ UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample
16
+ shaped output. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented
17
+ for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None):
18
+ The number of transformer blocks of type BasicTransformerBlock, in the upsampling
19
+ blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for
20
+ CrossAttnDownBlock2D, CrossAttnUpBlock2D,
21
+ UNetMidBlock2DCrossAttn.
22
+ encoder_hid_dim (int, optional, defaults to None):
23
+ If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim
24
+ dimension to cross_attention_dim.
25
+ encoder_hid_dim_type (str, optional, defaults to None):
26
+ If given, the encoder_hidden_states and potentially other embeddings are down-projected to text
27
+ embeddings of dimension cross_attention according to encoder_hid_dim_type.
28
+ attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads.
29
+ num_attention_heads (int, optional):
30
+ The number of attention heads. If not defined, defaults to attention_head_dim
31
+ resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config
32
+ for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift.
33
+ class_embed_type (str, optional, defaults to None):
34
+ The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None,
35
+ "timestep", "identity", "projection", or "simple_projection".
36
+ addition_embed_type (str, optional, defaults to None):
37
+ Configures an optional embedding which will be summed with the time embeddings. Choose from None or
38
+ β€œtext”. β€œtext” will use the TextTimeEmbedding layer.
39
+ addition_time_embed_dim: (int, optional, defaults to None):
40
+ Dimension for the timestep embeddings.
41
+ num_class_embeds (int, optional, defaults to None):
42
+ Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing
43
+ class conditioning with class_embed_type equal to None.
44
+ time_embedding_type (str, optional, defaults to positional):
45
+ The type of position embedding to use for timesteps. Choose from positional or fourier.
46
+ time_embedding_dim (int, optional, defaults to None):
47
+ An optional override for the dimension of the projected time embedding.
48
+ time_embedding_act_fn (str, optional, defaults to None):
49
+ Optional activation function to use only once on the time embeddings before they are passed to the rest of
50
+ the UNet. Choose from silu, mish, gelu, and swish.
51
+ timestep_post_act (str, optional, defaults to None):
52
+ The second activation function to use in timestep embedding. Choose from silu, mish and gelu.
53
+ time_cond_proj_dim (int, optional, defaults to None):
54
+ The dimension of cond_proj layer in the timestep embedding.
55
+ conv_in_kernel (int, optional, defaults to 3): The kernel size of conv_in layer. conv_out_kernel (int,
56
+ optional, defaults to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int,
57
+ optional): The dimension of the class_labels input when
58
+ class_embed_type="projection". Required when class_embed_type="projection".
59
+ class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time
60
+ embeddings with the class embeddings.
61
+ mid_block_only_cross_attention (bool, optional, defaults to None):
62
+ Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If
63
+ only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the
64
+ only_cross_attention value is used as the value for mid_block_only_cross_attention. Defaults to False
65
+ otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) β€”
66
+ Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
67
+ mitigate the β€œoversmoothing effect” in the enhanced denoising process. s2 (float) β€”
68
+ Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
69
+ mitigate the β€œoversmoothing effect” in the enhanced denoising process. b1 (float) β€” Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) β€” Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that
70
+ are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) β†’ UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) β€”
71
+ The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) β€” The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) β€”
72
+ The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) β€”
73
+ Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
74
+ timestep_cond β€” (torch.Tensor, optional, defaults to None):
75
+ Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed
76
+ through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) β€”
77
+ An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask
78
+ is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large
79
+ negative values to the attention scores corresponding to β€œdiscard” tokens. cross_attention_kwargs (dict, optional) β€”
80
+ A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under
81
+ self.processor in
82
+ diffusers.models.attention_processor.
83
+ added_cond_kwargs β€” (dict, optional):
84
+ A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that
85
+ are passed along to the UNet blocks.
86
+ down_block_additional_residuals β€” (tuple of torch.Tensor, optional):
87
+ A tuple of tensors that if specified are added to the residuals of down unet blocks.
88
+ mid_block_additional_residual β€” (torch.Tensor, optional):
89
+ A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) β€”
90
+ A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If
91
+ True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias,
92
+ which adds large negative values to the attention scores corresponding to β€œdiscard” tokens. return_dict (bool, optional, defaults to True) β€”
93
+ Whether or not to return a UNet2DConditionOutput instead of a plain
94
+ tuple. cross_attention_kwargs (dict, optional) β€”
95
+ A kwargs dictionary that if specified is passed along to the AttnProcessor.
96
+ added_cond_kwargs β€” (dict, optional):
97
+ A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that
98
+ are passed along to the UNet blocks. down_block_additional_residuals (tuple of torch.Tensor, optional) β€”
99
+ additional residuals to be added to UNet long skip connections from down blocks to up blocks for
100
+ example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) β€”
101
+ additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) β€”
102
+ additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns
103
+ UNet2DConditionOutput or tuple
104
+
105
+ If return_dict is True, an UNet2DConditionOutput is returned, otherwise
106
+ a tuple is returned where the first element is the sample tensor.
107
+ The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query,
108
+ key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is πŸ§ͺ experimental. set_attention_slice < source > ( slice_size ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") β€”
109
+ When "auto", input to the attention heads is halved, so attention is computed in two steps. If
110
+ "max", maximum amount of memory is saved by running only one slice at a time. If a number is
111
+ provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim
112
+ must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in
113
+ several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) β€”
114
+ The instantiated processor class or a dictionary of processor classes that will be set as the processor
115
+ for all Attention layers.
116
+ If processor is a dict, the key needs to define the path to the corresponding cross attention
117
+ processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is πŸ§ͺ experimental. UNet2DConditionOutput class diffusers.models.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) β€”
118
+ The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = <class 'jax.numpy.float32'> flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = <flax.linen.module._Sentinel object at 0x7fb48fdbfdf0> name: Optional = None ) Parameters sample_size (int, optional) β€”
119
+ The size of the input sample. in_channels (int, optional, defaults to 4) β€”
120
+ The number of channels in the input sample. out_channels (int, optional, defaults to 4) β€”
121
+ The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) β€”
122
+ The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) β€”
123
+ The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) β€”
124
+ The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) β€”
125
+ The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) β€”
126
+ The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) β€”
127
+ The number of attention heads. cross_attention_dim (int, optional, defaults to 768) β€”
128
+ The dimension of the cross attention features. dropout (float, optional, defaults to 0) β€”
129
+ Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) β€”
130
+ Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) β€” The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) β€”
131
+ Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) β€”
132
+ Whether to split the head dimension into a new axis for the self-attention computation. In most cases,
133
+ enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample
134
+ shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for its generic methods
135
+ implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module
136
+ subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its
137
+ general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) β€”
138
+ The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) β€œReturns a new object replacing the specified fields with new values.
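+ As a hedged illustration (this sketch is not part of the API reference above; the checkpoint name, tensor shapes, and FreeU values are assumptions based on the Stable Diffusion v1-5 layout), the PyTorch UNet2DConditionModel can be loaded on its own and run for a single denoising step:
+ import torch
+ from diffusers import UNet2DConditionModel
+
+ # Load only the UNet from a full Stable Diffusion checkpoint (assumes the usual "unet" subfolder).
+ unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
+
+ # A 64x64 latent (512 / 8), one timestep, and 77 dummy text-encoder states of size cross_attention_dim.
+ sample = torch.randn(1, unet.config.in_channels, 64, 64)
+ timestep = torch.tensor([10])
+ encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim)
+
+ with torch.no_grad():
+     output = unet(sample, timestep, encoder_hidden_states=encoder_hidden_states)
+ print(output.sample.shape)  # torch.Size([1, 4, 64, 64])
+
+ # The FreeU helpers documented above can be toggled directly on the model
+ # (the scaling values here are illustrative, not official recommendations).
+ unet.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
+ unet.disable_freeu()
+ In normal use you would not call the UNet directly; a pipeline wires it together with the scheduler, text encoder, and VAE.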
scrapped_outputs/02bd848b35977a9c9f00ad003cb069ef.txt ADDED
@@ -0,0 +1,48 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ How to use Stable Diffusion in Apple Silicon (M1/M2)
2
+
3
+ πŸ€— Diffusers is compatible with Apple silicon for Stable Diffusion inference, using the PyTorch mps device. These are the steps you need to follow to use your M1 or M2 computer with Stable Diffusion.
4
+
5
+ Requirements
6
+
7
+ Mac computer with Apple silicon (M1/M2) hardware.
8
+ macOS 12.6 or later (13.0 or later recommended).
9
+ arm64 version of Python.
10
+ PyTorch 1.13. You can install it with pip or conda using the instructions in https://pytorch.org/get-started/locally/.
11
+
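+ If you want to confirm that your environment meets these requirements before loading a pipeline, PyTorch exposes a couple of checks for the mps backend (a small sketch, not part of the original guide):
+ import torch
+
+ # True when an arm64 build of PyTorch is running on Apple silicon hardware.
+ print(torch.backends.mps.is_available())
+ # True when the installed PyTorch wheel was compiled with MPS support.
+ print(torch.backends.mps.is_built())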
12
+ Inference Pipeline
13
+
14
+ The snippet below demonstrates how to use the mps backend using the familiar to() interface to move the Stable Diffusion pipeline to your M1 or M2 device.
15
+ We recommend β€œpriming” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue we have detected: the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and it’s OK to use just one inference step and discard the result.
16
+
17
+
18
+ Copied
19
+ # make sure you're logged in with `huggingface-cli login`
20
+ from diffusers import StableDiffusionPipeline
21
+
22
+ pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
23
+ pipe = pipe.to("mps")
24
+
25
+ # Recommended if your computer has < 64 GB of RAM
26
+ pipe.enable_attention_slicing()
27
+
28
+ prompt = "a photo of an astronaut riding a horse on mars"
29
+
30
+ # First-time "warmup" pass (see explanation above)
31
+ _ = pipe(prompt, num_inference_steps=1)
32
+
33
+ # Results match those from the CPU device after the warmup pass.
34
+ image = pipe(prompt).images[0]
35
+
36
+ Performance Recommendations
37
+
38
+ M1/M2 performance is very sensitive to memory pressure. The system will automatically swap if it needs to, but performance will degrade significantly when it does.
39
+ We recommend you use attention slicing to reduce memory pressure during inference and prevent swapping, particularly if your computer has less than 64 GB of system RAM, or if you generate images at non-standard resolutions larger than 512 Γ— 512 pixels. Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% in computers without universal memory, but we have observed better performance in most Apple Silicon computers, unless you have 64 GB or more.
40
+
41
+
42
+ Copied
43
+ pipeline.enable_attention_slicing()
44
+
45
+ Known Issues
46
+
47
+ As mentioned above, we are investigating a strange first-time inference issue.
48
+ Generating multiple prompts in a batch crashes or doesn’t work reliably. We believe this is related to the mps backend in PyTorch. This is being resolved, but for now we recommend iterating over prompts instead of batching, as sketched below.
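+ As a small sketch of that workaround (the prompts below are just placeholders), loop over the prompts and call the pipeline once per prompt instead of passing the whole list in a single batched call:
+ prompts = [
+     "a photo of an astronaut riding a horse on mars",
+     "a watercolor painting of a lighthouse at dawn",
+ ]
+ # One pipeline call per prompt avoids the unreliable batched path on mps.
+ images = [pipe(prompt).images[0] for prompt in prompts]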
scrapped_outputs/031de0c7e6fbc268b733b53d76fd629b.txt ADDED
@@ -0,0 +1,58 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can decode the latents of a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. To use it with Stable Diffusion v2.1: Copied import torch
2
+ from diffusers import DiffusionPipeline, AutoencoderTiny
3
+
4
+ pipe = DiffusionPipeline.from_pretrained(
5
+ "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
6
+ )
7
+ pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
8
+ pipe = pipe.to("cuda")
9
+
10
+ prompt = "slice of delicious New York-style berry cheesecake"
11
+ image = pipe(prompt, num_inference_steps=25).images[0]
12
+ image To use with Stable Diffusion XL 1.0 Copied import torch
13
+ from diffusers import DiffusionPipeline, AutoencoderTiny
14
+
15
+ pipe = DiffusionPipeline.from_pretrained(
16
+ "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
17
+ )
18
+ pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
19
+ pipe = pipe.to("cuda")
20
+
21
+ prompt = "slice of delicious New York-style berry cheesecake"
22
+ image = pipe(prompt, num_inference_steps=25).images[0]
23
+ image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) β€” Number of channels in the input image. out_channels (int, optional, defaults to 3) β€” Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) β€”
24
+ Tuple of integers representing the number of output channels for each encoder block. The length of the
25
+ tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) β€”
26
+ Tuple of integers representing the number of output channels for each decoder block. The length of the
27
+ tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") β€”
28
+ Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) β€”
29
+ Number of channels in the latent representation. The latent space acts as a compressed representation of
30
+ the input image. upsampling_scaling_factor (int, optional, defaults to 2) β€”
31
+ Scaling factor for upsampling in the decoder. It determines the size of the output image during the
32
+ upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) β€”
33
+ Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The
34
+ length of the tuple should be equal to the number of stages in the encoder. Each stage has a different
35
+ number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) β€”
36
+ Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The
37
+ length of the tuple should be equal to the number of stages in the decoder. Each stage has a different
38
+ number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) β€”
39
+ Magnitude of the latent representation. This parameter scales the latent representation values to control
40
+ the extent of information preservation. latent_shift (float, optional, defaults to 0.5) β€”
41
+ Shift applied to the latent representation. This parameter controls the center of the latent space. scaling_factor (float, optional, defaults to 1.0) β€”
42
+ The component-wise standard deviation of the trained latent space computed using the first batch of the
43
+ training set. This is used to scale the latent space to have unit variance when training the diffusion
44
+ model. The latents are scaled with the formula z = z * scaling_factor before being passed to the
45
+ diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image
46
+ Synthesis with Latent Diffusion Models paper. For this Autoencoder,
47
+ however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, defaults to False) β€”
48
+ If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
49
+ can be fine-tuned / trained to a lower range without losing too much precision, in which case
50
+ force_upcast can be set to False (see this fp16-friendly
51
+ AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for
52
+ all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing
53
+ decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing
54
+ decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
55
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
56
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
57
+ processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) β€” Input sample. return_dict (bool, optional, defaults to True) β€”
58
+ Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) β€” Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method.
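+ As a hedged sketch of the memory helpers documented above (assuming pipe is one of the pipelines built in the snippets at the top of this page), slicing and tiling can be toggled on the tiny autoencoder attached to a pipeline:
+ # Decode batched latents one image at a time to reduce peak memory.
+ pipe.vae.enable_slicing()
+ # Decode large images tile by tile instead of in a single pass.
+ pipe.vae.enable_tiling()
+
+ # Both can be switched back off when memory is not a concern.
+ pipe.vae.disable_slicing()
+ pipe.vae.disable_tiling()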
scrapped_outputs/0337e3a463f82d01341bcedbe24ef622.txt ADDED
@@ -0,0 +1,217 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline πŸ’‘ Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline
2
+
3
+ repo_id = "runwayml/stable-diffusion-v1-5"
4
+ pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline
5
+
6
+ repo_id = "runwayml/stable-diffusion-v1-5"
7
+ pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline
8
+
9
+ repo_id = "runwayml/stable-diffusion-v1-5"
10
+ pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install
11
+ git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline
12
+
13
+ repo_id = "./stable-diffusion-v1-5"
14
+ stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline
15
+
16
+ repo_id = "runwayml/stable-diffusion-v1-5"
17
+ stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
18
+ stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler
19
+
20
+ repo_id = "runwayml/stable-diffusion-v1-5"
21
+ scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
22
+ stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline
23
+
24
+ repo_id = "runwayml/stable-diffusion-v1-5"
25
+ stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True)
26
+ """
27
+ You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
28
+ """ Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
29
+
30
+ model_id = "runwayml/stable-diffusion-v1-5"
31
+ stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
32
+
33
+ components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
34
+
35
+ model_id = "runwayml/stable-diffusion-v1-5"
36
+ stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
37
+ stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(
38
+ vae=stable_diffusion_txt2img.vae,
39
+ text_encoder=stable_diffusion_txt2img.text_encoder,
40
+ tokenizer=stable_diffusion_txt2img.tokenizer,
41
+ unet=stable_diffusion_txt2img.unet,
42
+ scheduler=stable_diffusion_txt2img.scheduler,
43
+ safety_checker=None,
44
+ feature_extractor=None,
45
+ requires_safety_checker=False,
46
+ ) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. πŸ’‘ When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. checkpoint type weight name argument for loading weights original diffusion_pytorch_model.bin floating point diffusion_pytorch_model.fp16.bin variant, torch_dtype non-EMA diffusion_pytorch_model.non_ema.bin variant There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline
47
+ import torch
48
+
49
+ # load fp16 variant
50
+ stable_diffusion = DiffusionPipeline.from_pretrained(
51
+ "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
52
+ )
53
+ # load non_ema variant
54
+ stable_diffusion = DiffusionPipeline.from_pretrained(
55
+ "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True
56
+ ) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline
57
+
58
+ # save as fp16 variant
59
+ stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16")
60
+ # save as non-ema variant
61
+ stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # πŸ‘Ž this won't work
62
+ stable_diffusion = DiffusionPipeline.from_pretrained(
63
+ "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
64
+ )
65
+ # πŸ‘ this works
66
+ stable_diffusion = DiffusionPipeline.from_pretrained(
67
+ "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
68
+ ) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel
69
+
70
+ repo_id = "runwayml/stable-diffusion-v1-5"
71
+ model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel
72
+
73
+ repo_id = "google/ddpm-cifar10-32"
74
+ model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel
75
+
76
+ model = UNet2DConditionModel.from_pretrained(
77
+ "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True
78
+ )
79
+ model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers.
80
+ For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline
81
+ from diffusers import (
82
+ DDPMScheduler,
83
+ DDIMScheduler,
84
+ PNDMScheduler,
85
+ LMSDiscreteScheduler,
86
+ EulerAncestralDiscreteScheduler,
87
+ EulerDiscreteScheduler,
88
+ DPMSolverMultistepScheduler,
89
+ )
90
+
91
+ repo_id = "runwayml/stable-diffusion-v1-5"
92
+
93
+ ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler")
94
+ ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler")
95
+ pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler")
96
+ lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
97
+ euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
98
+ euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
99
+ dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler")
100
+
101
+ # replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler`
102
+ pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline
103
+
104
+ repo_id = "runwayml/stable-diffusion-v1-5"
105
+ pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
106
+ print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from πŸ€— Transformers. "safety_checker": a component for screening against harmful content. "scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from πŸ€— Transformers. "tokenizer": a CLIPTokenizer from πŸ€— Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline {
107
+ "feature_extractor": [
108
+ "transformers",
109
+ "CLIPImageProcessor"
110
+ ],
111
+ "safety_checker": [
112
+ "stable_diffusion",
113
+ "StableDiffusionSafetyChecker"
114
+ ],
115
+ "scheduler": [
116
+ "diffusers",
117
+ "PNDMScheduler"
118
+ ],
119
+ "text_encoder": [
120
+ "transformers",
121
+ "CLIPTextModel"
122
+ ],
123
+ "tokenizer": [
124
+ "transformers",
125
+ "CLIPTokenizer"
126
+ ],
127
+ "unet": [
128
+ "diffusers",
129
+ "UNet2DConditionModel"
130
+ ],
131
+ "vae": [
132
+ "diffusers",
133
+ "AutoencoderKL"
134
+ ]
135
+ } Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied .
136
+ β”œβ”€β”€ feature_extractor
137
+ β”‚Β Β  └── preprocessor_config.json
138
+ β”œβ”€β”€ model_index.json
139
+ β”œβ”€β”€ safety_checker
140
+ β”‚Β Β  β”œβ”€β”€ config.json
141
+ | β”œβ”€β”€ model.fp16.safetensors
142
+ β”‚ β”œβ”€β”€ model.safetensors
143
+ β”‚ β”œβ”€β”€ pytorch_model.bin
144
+ | └── pytorch_model.fp16.bin
145
+ β”œβ”€β”€ scheduler
146
+ β”‚Β Β  └── scheduler_config.json
147
+ β”œβ”€β”€ text_encoder
148
+ β”‚Β Β  β”œβ”€β”€ config.json
149
+ | β”œβ”€β”€ model.fp16.safetensors
150
+ β”‚ β”œβ”€β”€ model.safetensors
151
+ β”‚ |── pytorch_model.bin
152
+ | └── pytorch_model.fp16.bin
153
+ β”œβ”€β”€ tokenizer
154
+ β”‚Β Β  β”œβ”€β”€ merges.txt
155
+ β”‚Β Β  β”œβ”€β”€ special_tokens_map.json
156
+ β”‚Β Β  β”œβ”€β”€ tokenizer_config.json
157
+ β”‚Β Β  └── vocab.json
158
+ β”œβ”€β”€ unet
159
+ β”‚Β Β  β”œβ”€β”€ config.json
160
+ β”‚Β Β  β”œβ”€β”€ diffusion_pytorch_model.bin
161
+ | |── diffusion_pytorch_model.fp16.bin
162
+ β”‚ |── diffusion_pytorch_model.fp16.safetensors
163
+ β”‚ |── diffusion_pytorch_model.non_ema.bin
164
+ β”‚ |── diffusion_pytorch_model.non_ema.safetensors
165
+ β”‚ └── diffusion_pytorch_model.safetensors
166
+ |── vae
167
+ . β”œβ”€β”€ config.json
168
+ . β”œβ”€β”€ diffusion_pytorch_model.bin
169
+ β”œβ”€β”€ diffusion_pytorch_model.fp16.bin
170
+ β”œβ”€β”€ diffusion_pytorch_model.fp16.safetensors
171
+ └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer
172
+ CLIPTokenizer(
173
+ name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer",
174
+ vocab_size=49408,
175
+ model_max_length=77,
176
+ is_fast=False,
177
+ padding_side="right",
178
+ truncation_side="right",
179
+ special_tokens={
180
+ "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
181
+ "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
182
+ "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
183
+ "pad_token": "<|endoftext|>",
184
+ },
185
+ clean_up_tokenization_spaces=True
186
+ ) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied {
187
+ "_class_name": "StableDiffusionPipeline",
188
+ "_diffusers_version": "0.6.0",
189
+ "feature_extractor": [
190
+ "transformers",
191
+ "CLIPImageProcessor"
192
+ ],
193
+ "safety_checker": [
194
+ "stable_diffusion",
195
+ "StableDiffusionSafetyChecker"
196
+ ],
197
+ "scheduler": [
198
+ "diffusers",
199
+ "PNDMScheduler"
200
+ ],
201
+ "text_encoder": [
202
+ "transformers",
203
+ "CLIPTextModel"
204
+ ],
205
+ "tokenizer": [
206
+ "transformers",
207
+ "CLIPTokenizer"
208
+ ],
209
+ "unet": [
210
+ "diffusers",
211
+ "UNet2DConditionModel"
212
+ ],
213
+ "vae": [
214
+ "diffusers",
215
+ "AutoencoderKL"
216
+ ]
217
+ }
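+ As a hedged sketch (building on the pipeline object loaded earlier in this section), the same information stored in model_index.json is also exposed on the pipeline instance through its config, so you can inspect which library and class each component was loaded from:
+ for name, value in pipeline.config.items():
+     # Component entries are (library, class_name) pairs; keys starting with "_"
+     # hold metadata such as _class_name and _diffusers_version.
+     if isinstance(value, (list, tuple)) and len(value) == 2:
+         library, class_name = value
+         print(f"{name}: {library}.{class_name}")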
scrapped_outputs/0355b252e25654dc434b0da048d15629.txt ADDED
@@ -0,0 +1,56 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline
2
+
3
+ pipe = DiffusionPipeline.from_pretrained(
4
+ "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True
5
+ ) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. Copied import torch
6
+ from diffusers import DiffusionPipeline
7
+ from diffusers.utils import make_image_grid
8
+ from transformers import (
9
+ pipeline,
10
+ MBart50TokenizerFast,
11
+ MBartForConditionalGeneration,
12
+ )
13
+
14
+ device = "cuda" if torch.cuda.is_available() else "cpu"
15
+ device_dict = {"cuda": 0, "cpu": -1}
16
+
17
+ # add language detection pipeline
18
+ language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection"
19
+ language_detection_pipeline = pipeline("text-classification",
20
+ model=language_detection_model_ckpt,
21
+ device=device_dict[device])
22
+
23
+ # add model for language translation
24
+ translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
25
+ translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device)
26
+
27
+ diffuser_pipeline = DiffusionPipeline.from_pretrained(
28
+ "CompVis/stable-diffusion-v1-4",
29
+ custom_pipeline="multilingual_stable_diffusion",
30
+ detection_pipeline=language_detection_pipeline,
31
+ translation_model=translation_model,
32
+ translation_tokenizer=translation_tokenizer,
33
+ torch_dtype=torch.float16,
34
+ )
35
+
36
+ diffuser_pipeline.enable_attention_slicing()
37
+ diffuser_pipeline = diffuser_pipeline.to(device)
38
+
39
+ prompt = ["a photograph of an astronaut riding a horse",
40
+ "Una casa en la playa",
41
+ "Ein Hund, der Orange isst",
42
+ "Un restaurant parisien"]
43
+
44
+ images = diffuser_pipeline(prompt).images
45
+ make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. Copied from diffusers import DiffusionPipeline, DDIMScheduler
46
+ from diffusers.utils import load_image, make_image_grid
47
+
48
+ pipeline = DiffusionPipeline.from_pretrained(
49
+ "CompVis/stable-diffusion-v1-4",
50
+ custom_pipeline="magic_mix",
51
+ scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
52
+ ).to('cuda')
53
+
54
+ img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg")
55
+ mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5)
56
+ make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix
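+ Any other filename in the community folder can be swapped into the custom_pipeline argument in the same way. As a hedged sketch (the checkpoint and prompt below are just examples), this loads the long-prompt-weighting community pipeline, which accepts prompts longer than the usual 77-token limit and supports attention weighting syntax:
+ import torch
+ from diffusers import DiffusionPipeline
+
+ # "lpw_stable_diffusion" is one of the files in diffusers/examples/community.
+ pipe = DiffusionPipeline.from_pretrained(
+     "CompVis/stable-diffusion-v1-4",
+     custom_pipeline="lpw_stable_diffusion",
+     torch_dtype=torch.float16,
+ ).to("cuda")
+
+ image = pipe(prompt="a highly detailed portrait of a red fox, (sharp focus:1.2), soft light", num_inference_steps=30).images[0]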
scrapped_outputs/035d2eb81551ae17f2f6548c483bb4ce.txt ADDED
@@ -0,0 +1,61 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
2
+ It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query,
3
+ key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently πŸ§ͺ experimental in nature and can change in future. LoRAAttnProcessor class diffusers.models.attention_processor.LoRAAttnProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) β€”
4
+ The hidden size of the attention layer. cross_attention_dim (int, optional) β€”
5
+ The number of channels in the encoder_hidden_states. rank (int, defaults to 4) β€”
6
+ The dimension of the LoRA update matrices. network_alpha (int, optional) β€”
7
+ Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) β€”
8
+ Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism. LoRAAttnProcessor2_0 class diffusers.models.attention_processor.LoRAAttnProcessor2_0 < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int) β€”
9
+ The hidden size of the attention layer. cross_attention_dim (int, optional) β€”
10
+ The number of channels in the encoder_hidden_states. rank (int, defaults to 4) β€”
11
+ The dimension of the LoRA update matrices. network_alpha (int, optional) β€”
12
+ Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) β€”
13
+ Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism using PyTorch 2.0’s memory-efficient scaled dot-product
14
+ attention. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) β€”
15
+ Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) β€”
16
+ Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) β€”
17
+ The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) β€”
18
+ The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) β€”
19
+ Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) β€”
20
+ The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) β€”
21
+ Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) β€”
22
+ Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) β€”
23
+ The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) β€”
24
+ The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) β€”
25
+ Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) β€”
26
+ The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled
27
+ dot-product attention. AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text
28
+ encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra
29
+ learnable key and value matrices for the text encoder. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) β€”
30
+ The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) β€”
31
+ The number of channels in the encoder_hidden_states. rank (int, defaults to 4) β€”
32
+ The dimension of the LoRA update matrices. network_alpha (int, optional) β€”
33
+ Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) β€”
34
+ Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text
35
+ encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) β€”
36
+ The base
37
+ operator to
38
+ use as the attention operator. It is recommended to set this to None and allow xFormers to choose the best
39
+ operator. Processor for implementing memory efficient attention using xFormers. LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) β€”
40
+ The hidden size of the attention layer. cross_attention_dim (int, optional) β€”
41
+ The number of channels in the encoder_hidden_states. rank (int, defaults to 4) β€”
42
+ The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) β€”
43
+ The base
44
+ operator to
45
+ use as the attention operator. It is recommended to set this to None and allow xFormers to choose the best
46
+ operator. network_alpha (int, optional) β€”
47
+ Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) β€”
48
+ Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) β€”
49
+ Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to False) β€”
50
+ Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) β€”
51
+ The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) β€”
52
+ The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) β€”
53
+ Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) β€”
54
+ The dropout probability to use. attention_op (Callable, optional, defaults to None) β€”
55
+ The base
56
+ operator to use
57
+ as the attention operator. It is recommended to set this to None and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) β€”
58
+ The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and
59
+ attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) β€”
60
+ The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and
61
+ attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.
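+ The processor classes above are normally not constructed by hand; pipelines and models expose helpers that install them. The snippet below is only a rough sketch (assuming a standard Stable Diffusion v1-5 checkpoint and a CUDA device, not an example taken from this page) of how such helpers are typically called:
+ import torch
+ from diffusers import StableDiffusionPipeline
+ from diffusers.models.attention_processor import AttnProcessor2_0
+
+ pipe = StableDiffusionPipeline.from_pretrained(
+     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
+ ).to("cuda")
+
+ # split the attention computation into slices to lower peak memory
+ # (roughly what the Sliced*Processor classes above implement)
+ pipe.enable_attention_slicing()
+
+ # alternatively, route attention through xFormers (requires the xformers package):
+ # pipe.enable_xformers_memory_efficient_attention()
+
+ # or install a specific processor on every attention module of the UNet:
+ # pipe.unet.set_attn_processor(AttnProcessor2_0())
+
+ image = pipe("an astronaut riding a horse").images[0]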
scrapped_outputs/037a312aaecccf6bc6297a4be6c94e34.txt ADDED
@@ -0,0 +1,107 @@
1
+ Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation.
2
+ Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) β€”
3
+ Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) β€”
4
+ Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) β€”
5
+ A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) β€”
6
+ A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) β€”
7
+ A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
8
+ DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) β€”
9
+ Classification module that estimates whether generated images could be considered offensive or harmful.
10
+ Please refer to the model card for more details
11
+ about a model’s potential harms. feature_extractor (CLIPImageProcessor) β€”
12
+ A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass
13
+ documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular
14
+ device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) β†’ ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) β€”
15
+ The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
16
+ The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) β€”
17
+ The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) β€”
18
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
19
+ expense of slower inference. guidance_scale (float, optional, defaults to 7.5) β€”
20
+ A higher guidance scale value encourages the model to generate images closely linked to the text
21
+ prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) β€”
22
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
23
+ pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) β€”
24
+ The number of images to generate per prompt. eta (float, optional, defaults to 0.0) β€”
25
+ Corresponds to parameter eta (Ξ·) from the DDIM paper. Only applies
26
+ to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) β€”
27
+ A torch.Generator to make
28
+ generation deterministic. latents (torch.FloatTensor, optional) β€”
29
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
30
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
31
+ tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") β€”
32
+ The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) β€”
33
+ Whether or not to return a StableDiffusionPipelineOutput instead of a
34
+ plain tuple. callback (Callable, optional) β€”
35
+ A function that calls every callback_steps steps during inference. The function is called with the
36
+ following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) β€”
37
+ The frequency at which the callback function is called. If not specified, the callback is called at
38
+ every step. editing_prompt (str or List[str], optional) β€”
39
+ The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting
40
+ editing_prompt = None. Guidance direction of prompt should be specified via
41
+ reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) β€”
42
+ Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be
43
+ specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) β€”
44
+ Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) β€”
45
+ Guidance scale for semantic guidance. If provided as a list, values should correspond to
46
+ editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) β€”
47
+ Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is
48
+ calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) β€”
49
+ Number of diffusion steps (for each prompt) after which semantic guidance is no longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) β€”
50
+ Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) β€”
51
+ Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0,
52
+ momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than
53
+ sld_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) β€”
54
+ Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous
55
+ momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than
56
+ edit_warmup_steps). edit_weights (List[float], optional, defaults to None) β€”
57
+ Indicates how much each individual concept should influence the overall guidance. If no weights are
58
+ provided all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) β€”
59
+ List of pre-generated guidance vectors to be applied at generation. Length of the list has to
60
+ correspond to num_inference_steps. Returns
61
+ ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple
62
+
63
+ If return_dict is True,
64
+ ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput is returned, otherwise a
65
+ tuple is returned where the first element is a list with the generated images and the second element
66
+ is a list of bools indicating whether the corresponding generated image contains β€œnot-safe-for-work”
67
+ (nsfw) content.
68
+ The call function to the pipeline for generation. Examples: Copied >>> import torch
69
+ >>> from diffusers import SemanticStableDiffusionPipeline
70
+
71
+ >>> pipe = SemanticStableDiffusionPipeline.from_pretrained(
72
+ ... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
73
+ ... )
74
+ >>> pipe = pipe.to("cuda")
75
+
76
+ >>> out = pipe(
77
+ ... prompt="a photo of the face of a woman",
78
+ ... num_images_per_prompt=1,
79
+ ... guidance_scale=7,
80
+ ... editing_prompt=[
81
+ ... "smiling, smile", # Concepts to apply
82
+ ... "glasses, wearing glasses",
83
+ ... "curls, wavy hair, curly hair",
84
+ ... "beard, full beard, mustache",
85
+ ... ],
86
+ ... reverse_editing_direction=[
87
+ ... False,
88
+ ... False,
89
+ ... False,
90
+ ... False,
91
+ ... ], # Direction of guidance i.e. increase all concepts
92
+ ... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept
93
+ ... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept
94
+ ... edit_threshold=[
95
+ ... 0.99,
96
+ ... 0.975,
97
+ ... 0.925,
98
+ ... 0.96,
99
+ ... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions
100
+ ... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance
101
+ ... edit_mom_beta=0.6, # Momentum beta
102
+ ... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other
103
+ ... )
104
+ >>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) β€”
105
+ List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) β€”
106
+ List indicating whether the corresponding generated image contains β€œnot-safe-for-work” (nsfw) content or
107
+ None if safety checking could not be performed. Output class for Stable Diffusion pipelines.
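+ As a small addendum, the SemanticStableDiffusionPipelineOutput returned above can be inspected directly. A sketch that reuses the out object from the example (the output filename is purely illustrative):
+ # out.images is a list of PIL images; out.nsfw_content_detected is a list of bools or None
+ flags = out.nsfw_content_detected or [False] * len(out.images)
+ for i, (img, flagged) in enumerate(zip(out.images, flags)):
+     if not flagged:
+         img.save(f"sega_result_{i}.png")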
scrapped_outputs/039174a093290e2204530344edb27be3.txt ADDED
@@ -0,0 +1,265 @@
1
+ ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the πŸ€— Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab
2
+ #!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid
3
+ from PIL import Image
4
+ import cv2
5
+ import numpy as np
6
+
7
+ original_image = load_image(
8
+ "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
9
+ )
10
+
11
+ image = np.array(original_image)
12
+
13
+ low_threshold = 100
14
+ high_threshold = 200
15
+
16
+ image = cv2.Canny(image, low_threshold, high_threshold)
17
+ image = image[:, :, None]
18
+ image = np.concatenate([image, image, image], axis=2)
19
+ canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
20
+ import torch
21
+
22
+ controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True)
23
+ pipe = StableDiffusionControlNetPipeline.from_pretrained(
24
+ "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
25
+ )
26
+
27
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
28
+ pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe(
29
+ "the mona lisa", image=canny_image
30
+ ).images[0]
31
+ make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from πŸ€— Transformers to extract the depth map of an image: Copied import torch
32
+ import numpy as np
33
+
34
+ from transformers import pipeline
35
+ from diffusers.utils import load_image, make_image_grid
36
+
37
+ image = load_image(
38
+ "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg"
39
+ )
40
+
41
+ def get_depth_map(image, depth_estimator):
42
+ image = depth_estimator(image)["depth"]
43
+ image = np.array(image)
44
+ image = image[:, :, None]
45
+ image = np.concatenate([image, image, image], axis=2)
46
+ detected_map = torch.from_numpy(image).float() / 255.0
47
+ depth_map = detected_map.permute(2, 0, 1)
48
+ return depth_map
49
+
50
+ depth_estimator = pipeline("depth-estimation")
51
+ depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler
52
+ import torch
53
+
54
+ controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True)
55
+ pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
56
+ "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
57
+ )
58
+
59
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
60
+ pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe(
61
+ "lego batman and robin", image=image, control_image=depth_map,
62
+ ).images[0]
63
+ make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid
64
+
65
+ init_image = load_image(
66
+ "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg"
67
+ )
68
+ init_image = init_image.resize((512, 512))
69
+
70
+ mask_image = load_image(
71
+ "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg"
72
+ )
73
+ mask_image = mask_image.resize((512, 512))
74
+ make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. Copied import numpy as np
75
+ import torch
76
+
77
+ def make_inpaint_condition(image, image_mask):
78
+ image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
79
+ image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
80
+
81
+ assert image.shape[0:1] == image_mask.shape[0:1]
82
+ image[image_mask > 0.5] = -1.0 # set as masked pixel
83
+ image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
84
+ image = torch.from_numpy(image)
85
+ return image
86
+
87
+ control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler
88
+
89
+ controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True)
90
+ pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
91
+ "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
92
+ )
93
+
94
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
95
+ pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe(
96
+ "corgi face with large ears, detailed, pixar, animated, disney",
97
+ num_inference_steps=20,
98
+ eta=1.0,
99
+ image=init_image,
100
+ mask_image=mask_image,
101
+ control_image=control_image,
102
+ ).images[0]
103
+ make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do its best to β€œguess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
104
+ from diffusers.utils import load_image, make_image_grid
105
+ import numpy as np
106
+ import torch
107
+ from PIL import Image
108
+ import cv2
109
+
110
+ controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True)
111
+ pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda")
112
+
113
+ original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png")
114
+
115
+ image = np.array(original_image)
116
+
117
+ low_threshold = 100
118
+ high_threshold = 200
119
+
120
+ image = cv2.Canny(image, low_threshold, high_threshold)
121
+ image = image[:, :, None]
122
+ image = np.concatenate([image, image, image], axis=2)
123
+ canny_image = Image.fromarray(image)
124
+
125
+ image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0]
126
+ make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the πŸ€— Diffusers Hub organization! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
127
+ from diffusers.utils import load_image, make_image_grid
128
+ from PIL import Image
129
+ import cv2
130
+ import numpy as np
131
+ import torch
132
+
133
+ original_image = load_image(
134
+ "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
135
+ )
136
+
137
+ image = np.array(original_image)
138
+
139
+ low_threshold = 100
140
+ high_threshold = 200
141
+
142
+ image = cv2.Canny(image, low_threshold, high_threshold)
143
+ image = image[:, :, None]
144
+ image = np.concatenate([image, image, image], axis=2)
145
+ canny_image = Image.fromarray(image)
146
+ make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained(
147
+ "diffusers/controlnet-canny-sdxl-1.0",
148
+ torch_dtype=torch.float16,
149
+ use_safetensors=True
150
+ )
151
+ vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
152
+ pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
153
+ "stabilityai/stable-diffusion-xl-base-1.0",
154
+ controlnet=controlnet,
155
+ vae=vae,
156
+ torch_dtype=torch.float16,
157
+ use_safetensors=True
158
+ )
159
+ pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
160
+ negative_prompt = 'low quality, bad quality, sketches'
161
+
162
+ image = pipe(
163
+ prompt,
164
+ negative_prompt=negative_prompt,
165
+ image=canny_image,
166
+ controlnet_conditioning_scale=0.5,
167
+ ).images[0]
168
+ make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
169
+ from diffusers.utils import load_image, make_image_grid
170
+ import numpy as np
171
+ import torch
172
+ import cv2
173
+ from PIL import Image
174
+
175
+ prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
176
+ negative_prompt = "low quality, bad quality, sketches"
177
+
178
+ original_image = load_image(
179
+ "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
180
+ )
181
+
182
+ controlnet = ControlNetModel.from_pretrained(
183
+ "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True
184
+ )
185
+ vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
186
+ pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
187
+ "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True
188
+ )
189
+ pipe.enable_model_cpu_offload()
190
+
191
+ image = np.array(original_image)
192
+ image = cv2.Canny(image, 100, 200)
193
+ image = image[:, :, None]
194
+ image = np.concatenate([image, image, image], axis=2)
195
+ canny_image = Image.fromarray(image)
196
+
197
+ image = pipe(
198
+ prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, guess_mode=True,
199
+ ).images[0]
200
+ make_image_grid([original_image, canny_image, image], rows=1, cols=3) MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid
201
+ from PIL import Image
202
+ import numpy as np
203
+ import cv2
204
+
205
+ original_image = load_image(
206
+ "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
207
+ )
208
+ image = np.array(original_image)
209
+
210
+ low_threshold = 100
211
+ high_threshold = 200
212
+
213
+ image = cv2.Canny(image, low_threshold, high_threshold)
214
+
215
+ # zero out middle columns of image where pose will be overlaid
216
+ zero_start = image.shape[1] // 4
217
+ zero_end = zero_start + image.shape[1] // 2
218
+ image[:, zero_start:zero_end] = 0
219
+
220
+ image = image[:, :, None]
221
+ image = np.concatenate([image, image, image], axis=2)
222
+ canny_image = Image.fromarray(image)
223
+ make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab
224
+ #!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector
225
+
226
+ openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
227
+ original_image = load_image(
228
+ "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png"
229
+ )
230
+ openpose_image = openpose(original_image)
231
+ make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler
232
+ import torch
233
+
234
+ controlnets = [
235
+ ControlNetModel.from_pretrained(
236
+ "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
237
+ ),
238
+ ControlNetModel.from_pretrained(
239
+ "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True
240
+ ),
241
+ ]
242
+
243
+ vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
244
+ pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
245
+ "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True
246
+ )
247
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
248
+ pipe.enable_model_cpu_offload() Now you can pass your prompt (and an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality"
249
+ negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality"
250
+
251
+ generator = torch.manual_seed(1)
252
+
253
+ images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))]
254
+
255
+ images = pipe(
256
+ prompt,
257
+ image=images,
258
+ num_inference_steps=25,
259
+ generator=generator,
260
+ negative_prompt=negative_prompt,
261
+ num_images_per_prompt=3,
262
+ controlnet_conditioning_scale=[1.0, 0.8],
263
+ ).images
264
+ make_image_grid([original_image, canny_image, openpose_image,
265
+ images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3)
scrapped_outputs/03a8acbaedc64b38f5af066e6bbee2e3.txt ADDED
@@ -0,0 +1,10 @@
1
+ Using Diffusers with other modalities
2
+
3
+ Diffusers is in the process of expanding to modalities other than images.
4
+ Example type
5
+ Colab
6
+ Pipeline
7
+ Molecule conformation generation
8
+
9
+ ❌
10
+ More coming soon!
scrapped_outputs/041d6ec5bc898d377b96ad1c3e5ce22b.txt ADDED
@@ -0,0 +1 @@
1
+ Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of πŸ€— Diffusers’ goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors.
scrapped_outputs/04343d970e3a9bf96cf88b007a727277.txt ADDED
@@ -0,0 +1,17 @@
1
+ Token merging Token merging (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network, which can reduce the inference latency of StableDiffusionPipeline. Install ToMe from pip: Copied pip install tomesd You can use ToMe from the tomesd library with the apply_patch function: Copied from diffusers import StableDiffusionPipeline
2
+ import torch
3
+ import tomesd
4
+
5
+ pipeline = StableDiffusionPipeline.from_pretrained(
6
+ "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
7
+ ).to("cuda")
8
+ + tomesd.apply_patch(pipeline, ratio=0.5)
9
+
10
+ image = pipeline("a photo of an astronaut riding a horse on mars").images[0] The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is ratio which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed-up inference even further, but at the cost of some degraded image quality. To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1
11
+ - Python version: 3.8.16
12
+ - PyTorch version (GPU?): 1.13.1+cu116 (True)
13
+ - Huggingface_hub version: 0.13.2
14
+ - Transformers version: 4.27.2
15
+ - Accelerate version: 0.18.0
16
+ - xFormers version: 0.0.16
17
+ - tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers.
+ GPU | Resolution | Batch size | Vanilla | ToMe | ToMe + xFormers
+ A100 | 512 | 10 | 6.88 | 5.26 (+23.55%) | 4.69 (+31.83%)
+ A100 | 768 | 10 | OOM | 14.71 | 11
+ A100 | 768 | 8 | OOM | 11.56 | 8.84
+ A100 | 768 | 4 | OOM | 5.98 | 4.66
+ A100 | 768 | 2 | 4.99 | 3.24 (+35.07%) | 2.1 (+37.88%)
+ A100 | 768 | 1 | 3.29 | 2.24 (+31.91%) | 2.03 (+38.3%)
+ A100 | 1024 | 10 | OOM | OOM | OOM
+ A100 | 1024 | 8 | OOM | OOM | OOM
+ A100 | 1024 | 4 | OOM | 12.51 | 9.09
+ A100 | 1024 | 2 | OOM | 6.52 | 4.96
+ A100 | 1024 | 1 | 6.4 | 3.61 (+43.59%) | 2.81 (+56.09%)
+ V100 | 512 | 10 | OOM | 10.03 | 9.29
+ V100 | 512 | 8 | OOM | 8.05 | 7.47
+ V100 | 512 | 4 | 5.7 | 4.3 (+24.56%) | 3.98 (+30.18%)
+ V100 | 512 | 2 | 3.14 | 2.43 (+22.61%) | 2.27 (+27.71%)
+ V100 | 512 | 1 | 1.88 | 1.57 (+16.49%) | 1.57 (+16.49%)
+ V100 | 768 | 10 | OOM | OOM | 23.67
+ V100 | 768 | 8 | OOM | OOM | 18.81
+ V100 | 768 | 4 | OOM | 11.81 | 9.7
+ V100 | 768 | 2 | OOM | 6.27 | 5.2
+ V100 | 768 | 1 | 5.43 | 3.38 (+37.75%) | 2.82 (+48.07%)
+ V100 | 1024 | 10 | OOM | OOM | OOM
+ V100 | 1024 | 8 | OOM | OOM | OOM
+ V100 | 1024 | 4 | OOM | OOM | 19.35
+ V100 | 1024 | 2 | OOM | 13 | 10.78
+ V100 | 1024 | 1 | OOM | 6.66 | 5.54
+ As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. You may be able to speed up inference even more with torch.compile.
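+ Following up on that last point, here is a minimal sketch of combining tomesd with torch.compile (it assumes PyTorch 2.0+ and a CUDA device; the actual speed-up is hardware dependent and not benchmarked here):
+ import torch
+ import tomesd
+ from diffusers import StableDiffusionPipeline
+
+ pipeline = StableDiffusionPipeline.from_pretrained(
+     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
+ ).to("cuda")
+ tomesd.apply_patch(pipeline, ratio=0.5)  # merge 50% of redundant tokens
+ pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)  # compile the UNet (PyTorch 2.0+)
+
+ image = pipeline("a photo of an astronaut riding a horse on mars").images[0]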
scrapped_outputs/044358532f240b4e1a89ecfcec43efdc.txt ADDED
@@ -0,0 +1 @@
1
+ Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of πŸ€— Diffusers’ goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors.
scrapped_outputs/04532fa8bf4664942bca163e9ce7d3af.txt ADDED
@@ -0,0 +1,18 @@
1
+ Installation πŸ€— Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install πŸ€— Diffusers in a virtual environment.
2
+ If you’re unfamiliar with Python virtual environments, take a look at this guide.
3
+ A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install πŸ€— Transformers because πŸ€— Diffusers relies on its models: Pytorch Hide Pytorch content Note - PyTorch only supports Python 3.8 - 3.11 on Windows. Copied pip install diffusers["torch"] transformers JAX Hide JAX content Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing πŸ€— Diffusers from source, make sure you have PyTorch and πŸ€— Accelerate installed. To install πŸ€— Accelerate: Copied pip install accelerate Then install πŸ€— Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version.
4
+ The main version is useful for staying up-to-date with the latest developments.
5
+ For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet.
6
+ However, this means the main version may not always be stable.
7
+ We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day.
8
+ If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. Contribute to πŸ€— Diffusers and need to test changes in the code. Clone the repository and install πŸ€— Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git
9
+ cd diffusers Pytorch Hide Pytorch content Copied pip install -e ".[torch]" JAX Hide JAX content Copied pip install -e ".[flax]" These commands will link the folder you cloned the repository to and your Python library paths.
10
+ Python will now look inside the folder you cloned to in addition to the normal library paths.
11
+ For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of πŸ€— Diffusers with the following command: Copied cd ~/diffusers/
12
+ git pull Your Python environment will find the main version of πŸ€— Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache which is usually your home directory. You can change the cache location by specifying the HF_HOME or HUGGINGFACE_HUB_CACHE environment variables or configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run πŸ€— Diffusers offline. To prevent πŸ€— Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and πŸ€— Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests.
13
+ The data gathered includes the version of πŸ€— Diffusers and PyTorch/Flax, the requested model or pipeline class,
14
+ and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub.
15
+ This usage data helps us debug issues and prioritize new features.
16
+ Telemetry is only sent when loading models and pipelines from the Hub,
17
+ and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information, and we respect your privacy.
18
+ You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES
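+ Tying back to the Cache section above, a minimal sketch of overriding the cache location for a single download via the cache_dir argument (the directory path here is purely illustrative):
+ from diffusers import DiffusionPipeline
+
+ # cache_dir is forwarded to the Hub download machinery; "./my_diffusers_cache" is a hypothetical path
+ pipe = DiffusionPipeline.from_pretrained(
+     "runwayml/stable-diffusion-v1-5",
+     cache_dir="./my_diffusers_cache",
+ )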
scrapped_outputs/04863d9d6a0a778c9d89bfaf5c722799.txt ADDED
1
+ Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. To use with Stable Diffusion v-2.1: Copied import torch
2
+ from diffusers import DiffusionPipeline, AutoencoderTiny
3
+
4
+ pipe = DiffusionPipeline.from_pretrained(
5
+ "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
6
+ )
7
+ pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
8
+ pipe = pipe.to("cuda")
9
+
10
+ prompt = "slice of delicious New York-style berry cheesecake"
11
+ image = pipe(prompt, num_inference_steps=25).images[0]
12
+ image To use with Stable Diffusion XL 1.0 Copied import torch
13
+ from diffusers import DiffusionPipeline, AutoencoderTiny
14
+
15
+ pipe = DiffusionPipeline.from_pretrained(
16
+ "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
17
+ )
18
+ pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
19
+ pipe = pipe.to("cuda")
20
+
21
+ prompt = "slice of delicious New York-style berry cheesecake"
22
+ image = pipe(prompt, num_inference_steps=25).images[0]
23
+ image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) β€” Number of channels in the input image. out_channels (int, optional, defaults to 3) β€” Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) β€”
24
+ Tuple of integers representing the number of output channels for each encoder block. The length of the
25
+ tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) β€”
26
+ Tuple of integers representing the number of output channels for each decoder block. The length of the
27
+ tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") β€”
28
+ Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) β€”
29
+ Number of channels in the latent representation. The latent space acts as a compressed representation of
30
+ the input image. upsampling_scaling_factor (int, optional, defaults to 2) β€”
31
+ Scaling factor for upsampling in the decoder. It determines the size of the output image during the
32
+ upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) β€”
33
+ Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The
34
+ length of the tuple should be equal to the number of stages in the encoder. Each stage has a different
35
+ number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) β€”
36
+ Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The
37
+ length of the tuple should be equal to the number of stages in the decoder. Each stage has a different
38
+ number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) β€”
39
+ Magnitude of the latent representation. This parameter scales the latent representation values to control
40
+ the extent of information preservation. latent_shift (float, optional, defaults to 0.5) β€”
41
+ Shift applied to the latent representation. This parameter controls the center of the latent space. scaling_factor (float, optional, defaults to 1.0) β€”
42
+ The component-wise standard deviation of the trained latent space computed using the first batch of the
43
+ training set. This is used to scale the latent space to have unit variance when training the diffusion
44
+ model. The latents are scaled with the formula z = z * scaling_factor before being passed to the
45
+ diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image
46
+ Synthesis with Latent Diffusion Models paper. For this Autoencoder,
47
+ however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, default to False) β€”
48
+ If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
49
+ can be fine-tuned / trained to a lower range without losing too much precision, in which case
50
+ force_upcast can be set to False (see this fp16-friendly
51
+ AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for
52
+ all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing
53
+ decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing
54
+ decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
55
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
56
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
57
+ processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) β€” Input sample. return_dict (bool, optional, defaults to True) β€”
58
+ Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) β€” Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method.
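+ For completeness, here is a small sketch (building on the SDXL example above and reusing the same checkpoints) of how the slicing and tiling helpers documented here can be toggled on the tiny autoencoder to trade a little speed for lower peak memory on large or batched images:
+ import torch
+ from diffusers import DiffusionPipeline, AutoencoderTiny
+
+ pipe = DiffusionPipeline.from_pretrained(
+     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
+ )
+ pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
+ pipe = pipe.to("cuda")
+
+ # decode latents in slices / tiles to reduce peak memory
+ pipe.vae.enable_slicing()
+ pipe.vae.enable_tiling()
+
+ image = pipe("slice of delicious New York-style berry cheesecake", num_inference_steps=25).images[0]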
scrapped_outputs/04a5c43352cba1852d9743227a5502ec.txt ADDED
1
+ Installing xFormers
2
+
3
+ We recommend the use of xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption.
4
+ Starting from version 0.0.16 of xFormers, released in January 2023, installation can be easily performed using pre-built pip wheels:
5
+
6
+
7
+ Copied
8
+ pip install xformers
9
+ The xFormers PIP package requires the latest version of PyTorch (1.13.1 as of xFormers 0.0.16). If you need to use a previous version of PyTorch, then we recommend you install xFormers from source using the project instructions.
10
+ After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption, as discussed here.
11
+ According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or Dreambooth) on some GPUs. If you observe this problem, please install a development version as indicated in that comment.
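+ As a rough illustration of the call mentioned above (assuming a standard Stable Diffusion checkpoint and a CUDA device; not an official example from this page):
+ import torch
+ from diffusers import DiffusionPipeline
+
+ pipe = DiffusionPipeline.from_pretrained(
+     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
+ ).to("cuda")
+ pipe.enable_xformers_memory_efficient_attention()  # route attention through xFormers
+
+ image = pipe("a photo of an astronaut riding a horse on mars").images[0]
+ # pipe.disable_xformers_memory_efficient_attention()  # revert to the default attention processor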
scrapped_outputs/04b6c971d3b3042cb398245d60d142af.txt ADDED
@@ -0,0 +1,50 @@
1
+ Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text
2
+ encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra
3
+ learnable key and value matrices for the text encoder. CrossFrameAttnProcessor class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor < source > ( batch_size = 2 ) Cross frame attention processor. Each frame attends the first frame. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) β€”
4
+ Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) β€”
5
+ Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) β€”
6
+ The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) β€”
7
+ The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) β€”
8
+ Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) β€”
9
+ The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) β€”
10
+ Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) β€”
11
+ Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) β€”
12
+ The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) β€”
13
+ The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) β€”
14
+ Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) β€”
15
+ The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled
16
+ dot-product attention. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) β€”
17
+ Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to False) β€”
18
+ Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) β€”
19
+ The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) β€”
20
+ The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) β€”
21
+ Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) β€”
22
+ The dropout probability to use. attention_op (Callable, optional, defaults to None) β€”
23
+ The base
24
+ operator to use
25
+ as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
26
+ It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query,
27
+ key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently πŸ§ͺ experimental in nature and can change in future. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) β€”
28
+ The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) β€”
29
+ The number of channels in the encoder_hidden_states. rank (int, defaults to 4) β€”
30
+ The dimension of the LoRA update matrices. network_alpha (int, optional) β€”
31
+ Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) β€”
32
+ Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text
33
+ encoder. LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) β€”
34
+ The hidden size of the attention layer. cross_attention_dim (int, optional) β€”
35
+ The number of channels in the encoder_hidden_states. rank (int, defaults to 4) β€”
36
+ The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) β€”
37
+ The base
38
+ operator to
39
+ use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best
40
+ operator. network_alpha (int, optional) β€”
41
+ Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) β€”
42
+ Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) β€”
43
+ The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and
44
+ attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) β€”
45
+ The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and
46
+ attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) β€”
47
+ The base
48
+ operator to
49
+ use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best
50
+ operator. Processor for implementing memory efficient attention using xFormers.
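+ These processors are usually not called directly; instead, an instance is attached to a model with set_attn_processor. A minimal, hedged sketch (the checkpoint id is an assumption): Copied
+ import torch
+ from diffusers import StableDiffusionPipeline
+ from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0
+
+ pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
+
+ # use PyTorch 2.0 scaled dot-product attention in every attention block of the UNet
+ pipe.unet.set_attn_processor(AttnProcessor2_0())
+
+ # inspect the processors currently attached to the UNet
+ print(pipe.unet.attn_processors)
+
+ # revert to the default processor
+ pipe.unet.set_attn_processor(AttnProcessor())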
scrapped_outputs/0513b0801d8c780910edb8268d9b7b3b.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs,Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details. SDXL Turbo should disable guidance scale by setting guidance_scale=0.0. SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints!
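+ A short sketch tying the tips above together (single-step sampling at 512x512 with guidance disabled); the "stabilityai/sdxl-turbo" checkpoint and AutoPipelineForText2Image usage follow the SDXL Turbo guide referenced above: Copied
+ import torch
+ from diffusers import AutoPipelineForText2Image
+
+ pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16").to("cuda")
+
+ # SDXL Turbo: disable classifier-free guidance and sample in a single step
+ image = pipe(
+     "a cinematic photo of a raccoon wearing a spacesuit",
+     guidance_scale=0.0,
+     num_inference_steps=1,
+ ).images[0]
+ image.save("turbo.png")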
scrapped_outputs/05377f15590571c32cefbc2656f68eeb.txt ADDED
@@ -0,0 +1,137 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab
2
+ #!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits"
3
+ target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch
4
+ from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline
5
+
6
+ pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
7
+ "stabilityai/stable-diffusion-2-1",
8
+ torch_dtype=torch.float16,
9
+ safety_checker=None,
10
+ use_safetensors=True,
11
+ )
12
+ pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
13
+ pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
14
+ pipeline.enable_model_cpu_offload()
15
+ pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid
16
+
17
+ img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
18
+ raw_image = load_image(img_url).resize((768, 768))
19
+ raw_image Use the generate_mask() function to generate the image mask. You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image
20
+
21
+ source_prompt = "a bowl of fruits"
22
+ target_prompt = "a basket of pears"
23
+ mask_image = pipeline.generate_mask(
24
+ image=raw_image,
25
+ source_prompt=source_prompt,
26
+ target_prompt=target_prompt,
27
+ )
28
+ Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline(
29
+ prompt=target_prompt,
30
+ mask_image=mask_image,
31
+ image_latents=inv_latents,
32
+ negative_prompt=source_prompt,
33
+ ).images[0]
34
+ mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
35
+ make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the πŸ€— Transformers library: Copied import torch
36
+ from transformers import AutoTokenizer, T5ForConditionalGeneration
37
+
38
+ tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
39
+ model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl"
40
+ target_concept = "basket"
41
+
42
+ source_text = f"Provide a caption for images containing a {source_concept}. "
43
+ "The captions should be in English and should be no longer than 150 characters."
44
+
45
+ target_text = f"Provide a caption for images containing a {target_concept}. "
46
+ "The captions should be in English and should be no longer than 150 characters." Next, create a utility function to generate the prompts: Copied @torch.no_grad()
47
+ def generate_prompts(input_prompt):
48
+ input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda")
49
+
50
+ outputs = model.generate(
51
+ input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10
52
+ )
53
+ return tokenizer.batch_decode(outputs, skip_special_tokens=True)
54
+
55
+ source_prompts = generate_prompts(source_text)
56
+ target_prompts = generate_prompts(target_text)
57
+ print(source_prompts)
58
+ print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. You’ll use the text encoder to compute the text embeddings: Copied import torch
59
+ from diffusers import StableDiffusionDiffEditPipeline
60
+
61
+ pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
62
+ "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True
63
+ )
64
+ pipeline.enable_model_cpu_offload()
65
+ pipeline.enable_vae_slicing()
66
+
67
+ @torch.no_grad()
68
+ def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"):
69
+ embeddings = []
70
+ for sent in sentences:
71
+ text_inputs = tokenizer(
72
+ sent,
73
+ padding="max_length",
74
+ max_length=tokenizer.model_max_length,
75
+ truncation=True,
76
+ return_tensors="pt",
77
+ )
78
+ text_input_ids = text_inputs.input_ids
79
+ prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0]
80
+ embeddings.append(prompt_embeds)
81
+ return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0)
82
+
83
+ source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder)
84
+ target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler
85
+ from diffusers.utils import load_image, make_image_grid
86
+ from PIL import Image
87
+
88
+ pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
89
+ pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
90
+
91
+ img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
92
+ raw_image = load_image(img_url).resize((768, 768))
93
+
94
+ mask_image = pipeline.generate_mask(
95
+ image=raw_image,
96
+ - source_prompt=source_prompt,
97
+ - target_prompt=target_prompt,
98
+ + source_prompt_embeds=source_embeds,
99
+ + target_prompt_embeds=target_embeds,
100
+ )
101
+
102
+ inv_latents = pipeline.invert(
103
+ - prompt=source_prompt,
104
+ + prompt_embeds=source_embeds,
105
+ image=raw_image,
106
+ ).latents
107
+
108
+ output_image = pipeline(
109
+ mask_image=mask_image,
110
+ image_latents=inv_latents,
111
+ - prompt=target_prompt,
112
+ - negative_prompt=source_prompt,
113
+ + prompt_embeds=target_embeds,
114
+ + negative_prompt_embeds=source_embeds,
115
+ ).images[0]
116
+ mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L")
117
+ make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. Load the BLIP model and processor from the πŸ€— Transformers library: Copied import torch
118
+ from transformers import BlipForConditionalGeneration, BlipProcessor
119
+
120
+ processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
121
+ model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad()
122
+ def generate_caption(images, caption_generator, caption_processor):
123
+ text = "a photograph of"
124
+
125
+ inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype)
126
+ caption_generator.to("cuda")
127
+ outputs = caption_generator.generate(**inputs, max_new_tokens=128)
128
+
129
+ # offload caption generator
130
+ caption_generator.to("cpu")
131
+
132
+ caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0]
133
+ return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image
134
+
135
+ img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
136
+ raw_image = load_image(img_url).resize((768, 768))
137
+ caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents!
scrapped_outputs/05582e67bfcec7fa9b41e4219522b5e8.txt ADDED
@@ -0,0 +1,75 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab
2
+ #!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. Copied import torch
3
+ from diffusers import ShapEPipeline
4
+
5
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
6
+
7
+ pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16")
8
+ pipe = pipe.to(device)
9
+
10
+ guidance_scale = 15.0
11
+ prompt = ["A firecracker", "A birthday cupcake"]
12
+
13
+ images = pipe(
14
+ prompt,
15
+ guidance_scale=guidance_scale,
16
+ num_inference_steps=64,
17
+ frame_size=256,
18
+ ).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. Copied from diffusers.utils import export_to_gif
19
+
20
+ export_to_gif(images[0], "firecracker_3d.gif")
21
+ export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. Copied from diffusers import DiffusionPipeline
22
+ import torch
23
+
24
+ prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
25
+ pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
26
+
27
+ prompt = "A cheeseburger, white background"
28
+
29
+ image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple()
30
+ image = pipeline(
31
+ prompt,
32
+ image_embeds=image_embeds,
33
+ negative_image_embeds=negative_image_embeds,
34
+ ).images[0]
35
+
36
+ image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image
37
+ from diffusers import ShapEImg2ImgPipeline
38
+ from diffusers.utils import export_to_gif
39
+
40
+ pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda")
41
+
42
+ guidance_scale = 3.0
43
+ image = Image.open("burger.png").resize((256, 256))
44
+
45
+ images = pipe(
46
+ image,
47
+ guidance_scale=guidance_scale,
48
+ num_inference_steps=64,
49
+ frame_size=256,
50
+ ).images
51
+
52
+ gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the πŸ€— Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch
53
+ from diffusers import ShapEPipeline
54
+
55
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
56
+
57
+ pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16")
58
+ pipe = pipe.to(device)
59
+
60
+ guidance_scale = 15.0
61
+ prompt = "A birthday cupcake"
62
+
63
+ images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! Copied from diffusers.utils import export_to_ply
64
+
65
+ ply_path = export_to_ply(images[0], "3d_cake.ply")
66
+ print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh
67
+
68
+ mesh = trimesh.load("3d_cake.ply")
69
+ mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh
70
+ import numpy as np
71
+
72
+ mesh = trimesh.load("3d_cake.ply")
73
+ rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0])
74
+ mesh = mesh.apply_transform(rot)
75
+ mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer!
scrapped_outputs/0563c13a7c1c4c7bf534f8ba98328463.txt ADDED
@@ -0,0 +1,66 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Latent Consistency Model Multistep Scheduler Overview Multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao.
2
+ This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) β€”
3
+ The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) β€”
4
+ The starting beta value of inference. beta_end (float, defaults to 0.012) β€”
5
+ The final beta value. beta_schedule (str, defaults to "scaled_linear") β€”
6
+ The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
7
+ linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) β€”
8
+ Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) β€”
9
+ The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we
10
+ will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to False) β€”
11
+ Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) β€”
12
+ The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) β€”
13
+ Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
14
+ there is no previous alpha. When this option is True the previous alpha product is fixed to 1,
15
+ otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) β€”
16
+ An offset added to the inference steps. You can use a combination of offset=1 and
17
+ set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable
18
+ Diffusion. prediction_type (str, defaults to epsilon, optional) β€”
19
+ Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process),
20
+ sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of Imagen
21
+ Video paper). thresholding (bool, defaults to False) β€”
22
+ Whether to use the β€œdynamic thresholding” method. This is unsuitable for latent-space diffusion models such
23
+ as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) β€”
24
+ The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) β€”
25
+ The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") β€”
26
+ The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and
27
+ Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) β€”
28
+ The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions
29
+ c_skip and c_out. Increasing this will decrease the approximation error (although the approximation
30
+ error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) β€”
31
+ Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
32
+ dark samples instead of limiting it to samples with medium brightness. Loosely related to
33
+ --offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with
34
+ non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config
35
+ attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be
36
+ accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving
37
+ functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) β†’ torch.FloatTensor Parameters sample (torch.FloatTensor) β€”
38
+ The input sample. timestep (int, optional) β€”
39
+ The current timestep in the diffusion chain. Returns
40
+ torch.FloatTensor
41
+
42
+ A scaled input sample.
43
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
44
+ current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) β€”
45
+ The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) β€”
46
+ The number of diffusion steps used when generating samples with a pre-trained model. If used,
47
+ timesteps must be None. device (str or torch.device, optional) β€”
48
+ The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) β€”
49
+ The original number of inference steps, which will be used to generate a linearly-spaced timestep
50
+ schedule (which is different from the standard diffusers implementation). We will then take
51
+ num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as
52
+ our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) β€”
53
+ Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default
54
+ timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep
55
+ schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) β†’ ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) β€”
56
+ The direct output from learned diffusion model. timestep (float) β€”
57
+ The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) β€”
58
+ A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) β€”
59
+ A random number generator. return_dict (bool, optional, defaults to True) β€”
60
+ Whether or not to return a LCMSchedulerOutput or tuple. Returns
61
+ ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple
62
+
63
+ If return_dict is True, LCMSchedulerOutput is returned, otherwise a
64
+ tuple is returned where the first element is the sample tensor.
65
+ Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
66
+ process from the learned model outputs (most often the predicted noise).
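+ A brief usage sketch for pairing LCMScheduler with a latent consistency model pipeline; the "SimianLuo/LCM_Dreamshaper_v7" checkpoint id is an assumption and any LCM-distilled checkpoint should behave similarly: Copied
+ import torch
+ from diffusers import DiffusionPipeline, LCMScheduler
+
+ pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16).to("cuda")
+
+ # configure the LCM scheduler from the pipeline's existing scheduler config
+ pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
+
+ # latent consistency models typically need only 1-8 inference steps
+ image = pipe("a photo of a medieval castle at sunset", num_inference_steps=4, guidance_scale=8.0).images[0]
+ image.save("lcm.png")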
scrapped_outputs/056988b6242e71f9baa34a0128b3b910.txt ADDED
@@ -0,0 +1,61 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels.
2
+ These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model so check out it’s API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples for how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
3
+ import torch
4
+
5
+ repo_id = "stabilityai/stable-diffusion-2-base"
6
+ pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
7
+
8
+ pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
9
+ pipe = pipe.to("cuda")
10
+
11
+ prompt = "High quality photo of an astronaut riding a horse in space"
12
+ image = pipe(prompt, num_inference_steps=25).images[0]
13
+ image Inpainting Copied import torch
14
+ from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
15
+ from diffusers.utils import load_image, make_image_grid
16
+
17
+ img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
18
+ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
19
+
20
+ init_image = load_image(img_url).resize((512, 512))
21
+ mask_image = load_image(mask_url).resize((512, 512))
22
+
23
+ repo_id = "stabilityai/stable-diffusion-2-inpainting"
24
+ pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
25
+
26
+ pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
27
+ pipe = pipe.to("cuda")
28
+
29
+ prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
30
+ image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0]
31
+ make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline
32
+ from diffusers.utils import load_image, make_image_grid
33
+ import torch
34
+
35
+ # load model and scheduler
36
+ model_id = "stabilityai/stable-diffusion-x4-upscaler"
37
+ pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
38
+ pipeline = pipeline.to("cuda")
39
+
40
+ # let's download an image
41
+ url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
42
+ low_res_img = load_image(url)
43
+ low_res_img = low_res_img.resize((128, 128))
44
+ prompt = "a white cat"
45
+ upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
46
+ make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch
47
+ from diffusers import StableDiffusionDepth2ImgPipeline
48
+ from diffusers.utils import load_image, make_image_grid
49
+
50
+ pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
51
+ "stabilityai/stable-diffusion-2-depth",
52
+ torch_dtype=torch.float16,
53
+ ).to("cuda")
54
+
55
+
56
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
57
+ init_image = load_image(url)
58
+ prompt = "two tigers"
59
+ negative_prompt = "bad, deformed, ugly, bad anotomy"
60
+ image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0]
61
+ make_image_grid([init_image, image], rows=1, cols=2)
scrapped_outputs/0571ee854112d412f8b230bbf015c40b.txt ADDED
File without changes
scrapped_outputs/0589ba813ef6923277cca7ee6b454f67.txt ADDED
@@ -0,0 +1,138 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading a pretrained AutoencoderKL from pretrained VAE weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) β€”
2
+ Can be either:
3
+ A link to the .ckpt file (for example
4
+ "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
5
+ A path to a file containing all pipeline weights.
6
+ torch_dtype (str or torch.dtype, optional) β€”
7
+ Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the
8
+ dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) β€”
9
+ Whether or not to force the (re-)download of the model weights and configuration files, overriding the
10
+ cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) β€”
11
+ Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
12
+ is not used. resume_download (bool, optional, defaults to False) β€”
13
+ Whether or not to resume downloading the model weights and configuration files. If set to False, any
14
+ incompletely downloaded files are deleted. proxies (Dict[str, str], optional) β€”
15
+ A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) β€”
16
+ Whether to only load local model weights and configuration files or not. If set to True, the model
17
+ won’t be downloaded from the Hub. token (str or bool, optional) β€”
18
+ The token to use as HTTP bearer authorization for remote files. If True, the token generated from
19
+ diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") β€”
20
+ The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
21
+ allowed by Git. use_safetensors (bool, optional, defaults to None) β€”
22
+ If set to None, the safetensors weights are downloaded if they’re available and if the
23
+ safetensors library is installed. If set to True, the model is forcibly loaded from safetensors
24
+ weights. If set to False, safetensors weights are not loaded. extract_ema (bool, optional, defaults to False) β€”
25
+ Whether to extract the EMA weights or not. Pass True to extract the EMA weights which usually yield
26
+ higher quality images for inference. Non-EMA weights are usually better for continuing finetuning. upcast_attention (bool, optional, defaults to None) β€”
27
+ Whether the attention computation should always be upcasted. image_size (int, optional, defaults to 512) β€”
28
+ The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable
29
+ Diffusion v2 base model. Use 768 for Stable Diffusion v2. prediction_type (str, optional) β€”
30
+ The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and
31
+ the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2. num_in_channels (int, optional, defaults to None) β€”
32
+ The number of input channels. If None, it is automatically inferred. scheduler_type (str, optional, defaults to "pndm") β€”
33
+ Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. load_safety_checker (bool, optional, defaults to True) β€”
34
+ Whether to load the safety checker or not. text_encoder (CLIPTextModel, optional, defaults to None) β€”
35
+ An instance of CLIPTextModel to use, specifically the
36
+ clip-vit-large-patch14 variant. If this
37
+ parameter is None, the function loads a new instance of CLIPTextModel by itself if needed. vae (AutoencoderKL, optional, defaults to None) β€”
38
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. If
39
+ this parameter is None, the function will load a new instance of [CLIP] by itself, if needed. tokenizer (CLIPTokenizer, optional, defaults to None) β€”
40
+ An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance
41
+ of CLIPTokenizer by itself if needed. original_config_file (str) β€”
42
+ Path to .yaml config file corresponding to the original architecture. If None, will be
43
+ automatically inferred by looking for a key that only exists in SD2.0 models. kwargs (remaining dictionary of keyword arguments, optional) β€”
44
+ Can be used to overwrite load and saveable variables (for example the pipeline components of the
45
+ specific pipeline class). The overwritten components are directly passed to the pipelines __init__
46
+ method. See example below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors
47
+ format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline
48
+
49
+ >>> # Download pipeline from huggingface.co and cache.
50
+ >>> pipeline = StableDiffusionPipeline.from_single_file(
51
+ ... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
52
+ ... )
53
+
54
+ >>> # Download pipeline from local file
55
+ >>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt
56
+ >>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly")
57
+
58
+ >>> # Enable float16 and move to GPU
59
+ >>> pipeline = StableDiffusionPipeline.from_single_file(
60
+ ... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
61
+ ... torch_dtype=torch.float16,
62
+ ... )
63
+ >>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into an AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) β€”
64
+ Can be either:
65
+ A link to the .ckpt file (for example
66
+ "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
67
+ A path to a file containing all pipeline weights.
68
+ torch_dtype (str or torch.dtype, optional) β€”
69
+ Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the
70
+ dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) β€”
71
+ Whether or not to force the (re-)download of the model weights and configuration files, overriding the
72
+ cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) β€”
73
+ Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
74
+ is not used. resume_download (bool, optional, defaults to False) β€”
75
+ Whether or not to resume downloading the model weights and configuration files. If set to False, any
76
+ incompletely downloaded files are deleted. proxies (Dict[str, str], optional) β€”
77
+ A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) β€”
78
+ Whether to only load local model weights and configuration files or not. If set to True, the model
79
+ won’t be downloaded from the Hub. token (str or bool, optional) β€”
80
+ The token to use as HTTP bearer authorization for remote files. If True, the token generated from
81
+ diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") β€”
82
+ The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
83
+ allowed by Git. image_size (int, optional, defaults to 512) β€”
84
+ The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable
85
+ Diffusion v2 base model. Use 768 for Stable Diffusion v2. use_safetensors (bool, optional, defaults to None) β€”
86
+ If set to None, the safetensors weights are downloaded if they’re available and if the
87
+ safetensors library is installed. If set to True, the model is forcibly loaded from safetensors
88
+ weights. If set to False, safetensors weights are not loaded. upcast_attention (bool, optional, defaults to None) β€”
89
+ Whether the attention computation should always be upcasted. scaling_factor (float, optional, defaults to 0.18215) β€”
90
+ The component-wise standard deviation of the trained latent space computed using the first batch of the
91
+ training set. This is used to scale the latent space to have unit variance when training the diffusion
92
+ model. The latents are scaled with the formula z = z * scaling_factor before being passed to the
93
+ diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution
94
+ Image Synthesis with Latent Diffusion Models paper. kwargs (remaining dictionary of keyword arguments, optional) β€”
95
+ Can be used to overwrite load and saveable variables (for example the pipeline components of the
96
+ specific pipeline class). The overwritten components are directly passed to the pipelines __init__
97
+ method. See example below for more information. Instantiate a AutoencoderKL from pretrained ControlNet weights saved in the original .ckpt or
98
+ .safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading
99
+ a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL
100
+
101
+ url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file
102
+ model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlnetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) β€”
103
+ Can be either:
104
+ A link to the .ckpt file (for example
105
+ "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
106
+ A path to a file containing all pipeline weights.
107
+ torch_dtype (str or torch.dtype, optional) β€”
108
+ Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the
109
+ dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) β€”
110
+ Whether or not to force the (re-)download of the model weights and configuration files, overriding the
111
+ cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) β€”
112
+ Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
113
+ is not used. resume_download (bool, optional, defaults to False) β€”
114
+ Whether or not to resume downloading the model weights and configuration files. If set to False, any
115
+ incompletely downloaded files are deleted. proxies (Dict[str, str], optional) β€”
116
+ A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) β€”
117
+ Whether to only load local model weights and configuration files or not. If set to True, the model
118
+ won’t be downloaded from the Hub. token (str or bool, optional) β€”
119
+ The token to use as HTTP bearer authorization for remote files. If True, the token generated from
120
+ diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") β€”
121
+ The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
122
+ allowed by Git. use_safetensors (bool, optional, defaults to None) β€”
123
+ If set to None, the safetensors weights are downloaded if they’re available and if the
124
+ safetensors library is installed. If set to True, the model is forcibly loaded from safetensors
125
+ weights. If set to False, safetensors weights are not loaded. image_size (int, optional, defaults to 512) β€”
126
+ The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable
127
+ Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) β€”
128
+ Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) β€”
129
+ Can be used to overwrite load and saveable variables (for example the pipeline components of the
130
+ specific pipeline class). The overwritten components are directly passed to the pipelines __init__
131
+ method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or
132
+ .safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
133
+
134
+ url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path
135
+ controlnet = ControlNetModel.from_single_file(url)
136
+
137
+ url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path
138
+ pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet)
scrapped_outputs/05b0f824d9e6de69327504f27e90b9e6.txt ADDED
File without changes
scrapped_outputs/05cb598c3dda9e4d07cb0d08b8e89e80.txt ADDED
File without changes
scrapped_outputs/05fc9a1b7b04cc46e3de44a240e518af.txt ADDED
@@ -0,0 +1,40 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10Γ— to 50Γ— faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) β€”
2
+ A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) β€”
3
+ A scheduler to be used in combination with unet to denoise the encoded image. Can be one of
4
+ DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
5
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) β†’ ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) β€”
6
+ The number of images to generate. generator (torch.Generator, optional) β€”
7
+ A torch.Generator to make
8
+ generation deterministic. eta (float, optional, defaults to 0.0) —
9
+ Corresponds to parameter eta (η) from the DDIM paper. Only applies
10
+ to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to
11
+ DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) —
12
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
13
+ expense of slower inference. use_clipped_model_output (bool, optional, defaults to None) —
14
+ If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed
15
+ downstream to the scheduler (use None for schedulers which don't support this argument). output_type (str, optional, defaults to "pil") —
16
+ The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) —
17
+ Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns
18
+ ImagePipelineOutput or tuple
19
+
20
+ If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is
21
+ returned where the first element is a list with the generated images
22
+ The call function to the pipeline for generation. Example: >>> from diffusers import DDIMPipeline
23
+ >>> import PIL.Image
24
+ >>> import numpy as np
25
+
26
+ >>> # load model and scheduler
27
+ >>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")
28
+
29
+ >>> # run pipeline in inference (sample random noise and denoise)
30
+ >>> image = pipe(eta=0.0, num_inference_steps=50, output_type="np").images
31
+
32
+ >>> # process the NumPy array (values in [0, 1], shape (batch, height, width, channels)) to a PIL image
33
+ >>> image_processed = image * 255.0
34
+ >>> image_processed = image_processed.round()
35
+ >>> image_processed = image_processed.astype(np.uint8)
36
+ >>> image_pil = PIL.Image.fromarray(image_processed[0])
37
+
38
+ >>> # save image
39
+ >>> image_pil.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) —
40
+ List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines.
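To make the eta and num_inference_steps trade-off described above concrete, here is a minimal sketch that reuses the fusing/ddim-lsun-bedroom checkpoint from the example; the seed, step counts, and output filenames are illustrative assumptions, not part of the original documentation.

import torch
from diffusers import DDIMPipeline

pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")

# eta=0.0 gives deterministic DDIM sampling; a fixed generator makes the run reproducible
generator = torch.Generator().manual_seed(0)
fast = pipe(num_inference_steps=20, eta=0.0, generator=generator).images[0]

# more steps, and eta=1.0 (DDPM-like stochastic sampling), trade speed for sample quality
slow = pipe(num_inference_steps=100, eta=1.0).images[0]

fast.save("ddim_20_steps.png")
slow.save("ddim_100_steps.png")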
scrapped_outputs/060ba29d724ef0efe0746d1279958f67.txt ADDED
@@ -0,0 +1,24 @@
1
+ IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) —
2
+ The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) —
3
+ Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic
4
+ methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) —
5
+ The input sample. Returns
6
+ torch.FloatTensor
7
+
8
+ A scaled input sample.
9
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
10
+ current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) —
11
+ The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) —
12
+ The device to which the timesteps should be moved. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) —
13
+ The direct output from the learned diffusion model. timestep (int) —
14
+ The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) —
15
+ A current instance of a sample created by the diffusion process. return_dict (bool) —
16
+ Whether or not to return a SchedulerOutput or tuple. Returns
17
+ SchedulerOutput or tuple
18
+
19
+ If return_dict is True, SchedulerOutput is returned, otherwise a
20
+ tuple is returned where the first element is the sample tensor.
21
+ Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
22
+ the linear multistep method. It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
23
+ Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the
24
+ denoising loop. Base class for the output of a scheduler's step function.
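Because the methods documented above (set_timesteps, scale_model_input, step) are the pieces of a hand-written denoising loop, a minimal sketch of such a loop with IPNDMScheduler follows. The denoising network is replaced by a zero-tensor stand-in, so the result is not a meaningful image; the sample shape (1, 3, 32, 32) and the 50-step setting are illustrative assumptions.

import torch
from diffusers import IPNDMScheduler

scheduler = IPNDMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=50)

sample = torch.randn(1, 3, 32, 32)  # start from pure noise

for t in scheduler.timesteps:
    # scale_model_input is a no-op for IPNDM but keeps the loop interchangeable with other schedulers
    model_input = scheduler.scale_model_input(sample, t)
    model_output = torch.zeros_like(model_input)  # stand-in for a real denoising model call
    sample = scheduler.step(model_output, t, sample).prev_sample

print(sample.shape)  # torch.Size([1, 3, 32, 32])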